00:00:00.001 Started by upstream project "autotest-per-patch" build number 121254
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.071 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.072 The recommended git tool is: git
00:00:00.072 using credential 00000000-0000-0000-0000-000000000002
00:00:00.081 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.107 Fetching changes from the remote Git repository
00:00:00.110 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.133 Using shallow fetch with depth 1
00:00:00.133 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.133 > git --version # timeout=10
00:00:00.157 > git --version # 'git version 2.39.2'
00:00:00.157 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.157 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.157 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.697 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.710 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.723 Checking out Revision e004de56cb2c6b45ae79dfc6c1e79cfd5c84ce1f (FETCH_HEAD)
00:00:05.723 > git config core.sparsecheckout # timeout=10
00:00:05.736 > git read-tree -mu HEAD # timeout=10
00:00:05.754 > git checkout -f e004de56cb2c6b45ae79dfc6c1e79cfd5c84ce1f # timeout=5
00:00:05.774 Commit message: "jenkins/reset: add APC-C14 and APC-C18"
00:00:05.775 > git rev-list --no-walk e004de56cb2c6b45ae79dfc6c1e79cfd5c84ce1f # timeout=10
00:00:05.893 [Pipeline] Start of Pipeline
00:00:05.907 [Pipeline] library
00:00:05.909 Loading library shm_lib@master
00:00:05.909 Library shm_lib@master is cached. Copying from home.
00:00:05.928 [Pipeline] node
00:00:05.943 Running on GP2 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.946 [Pipeline] {
00:00:05.958 [Pipeline] catchError
00:00:05.960 [Pipeline] {
00:00:05.974 [Pipeline] wrap
00:00:05.984 [Pipeline] {
00:00:05.990 [Pipeline] stage
00:00:05.992 [Pipeline] { (Prologue)
00:00:06.182 [Pipeline] sh
00:00:06.459 + logger -p user.info -t JENKINS-CI
00:00:06.475 [Pipeline] echo
00:00:06.476 Node: GP2
00:00:06.485 [Pipeline] sh
00:00:06.782 [Pipeline] setCustomBuildProperty
00:00:06.796 [Pipeline] echo
00:00:06.798 Cleanup processes
00:00:06.804 [Pipeline] sh
00:00:07.085 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.085 2979633 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.099 [Pipeline] sh
00:00:07.378 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.378 ++ grep -v 'sudo pgrep'
00:00:07.378 ++ awk '{print $1}'
00:00:07.378 + sudo kill -9
00:00:07.378 + true
00:00:07.393 [Pipeline] cleanWs
00:00:07.402 [WS-CLEANUP] Deleting project workspace...
00:00:07.402 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.407 [WS-CLEANUP] done
00:00:07.412 [Pipeline] setCustomBuildProperty
00:00:07.426 [Pipeline] sh
00:00:07.704 + sudo git config --global --replace-all safe.directory '*'
00:00:07.776 [Pipeline] nodesByLabel
00:00:07.778 Found a total of 1 nodes with the 'sorcerer' label
00:00:07.789 [Pipeline] httpRequest
00:00:07.795 HttpMethod: GET
00:00:07.795 URL: http://10.211.164.96/packages/jbp_e004de56cb2c6b45ae79dfc6c1e79cfd5c84ce1f.tar.gz
00:00:07.797 Sending request to url: http://10.211.164.96/packages/jbp_e004de56cb2c6b45ae79dfc6c1e79cfd5c84ce1f.tar.gz
00:00:07.810 Response Code: HTTP/1.1 200 OK
00:00:07.811 Success: Status code 200 is in the accepted range: 200,404
00:00:07.811 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_e004de56cb2c6b45ae79dfc6c1e79cfd5c84ce1f.tar.gz
00:00:10.973 [Pipeline] sh
00:00:11.252 + tar --no-same-owner -xf jbp_e004de56cb2c6b45ae79dfc6c1e79cfd5c84ce1f.tar.gz
00:00:11.270 [Pipeline] httpRequest
00:00:11.275 HttpMethod: GET
00:00:11.275 URL: http://10.211.164.96/packages/spdk_7f48663afd798512b0ad2d8edc216f617ce0687b.tar.gz
00:00:11.275 Sending request to url: http://10.211.164.96/packages/spdk_7f48663afd798512b0ad2d8edc216f617ce0687b.tar.gz
00:00:11.289 Response Code: HTTP/1.1 200 OK
00:00:11.289 Success: Status code 200 is in the accepted range: 200,404
00:00:11.290 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_7f48663afd798512b0ad2d8edc216f617ce0687b.tar.gz
00:00:45.385 [Pipeline] sh
00:00:45.672 + tar --no-same-owner -xf spdk_7f48663afd798512b0ad2d8edc216f617ce0687b.tar.gz
00:00:48.978 [Pipeline] sh
00:00:49.263 + git -C spdk log --oneline -n5
00:00:49.263 7f48663af test/raid: remove unnecessary recreating of base bdevs
00:00:49.263 262776408 raid: keep raid bdev in CONFIGURING state when last base bdev is removed
00:00:49.263 fb3a5d5e5 raid: allow re-adding base bdev when in CONFIGURING state
00:00:49.263 9e7e51f3b raid: limit the no superblock examine case
00:00:49.263 8ecfc6bc0 raid: validate base bdev slot number when parsing superblock
00:00:49.276 [Pipeline] }
00:00:49.298 [Pipeline] // stage
00:00:49.307 [Pipeline] stage
00:00:49.309 [Pipeline] { (Prepare)
00:00:49.328 [Pipeline] writeFile
00:00:49.344 [Pipeline] sh
00:00:49.624 + logger -p user.info -t JENKINS-CI
00:00:49.638 [Pipeline] sh
00:00:49.952 + logger -p user.info -t JENKINS-CI
00:00:49.966 [Pipeline] sh
00:00:50.250 + cat autorun-spdk.conf
00:00:50.250 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:50.250 SPDK_TEST_NVMF=1
00:00:50.250 SPDK_TEST_NVME_CLI=1
00:00:50.250 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:50.250 SPDK_TEST_NVMF_NICS=e810
00:00:50.250 SPDK_TEST_VFIOUSER=1
00:00:50.250 SPDK_RUN_UBSAN=1
00:00:50.250 NET_TYPE=phy
00:00:50.258 RUN_NIGHTLY=0
00:00:50.263 [Pipeline] readFile
00:00:50.287 [Pipeline] withEnv
00:00:50.289 [Pipeline] {
00:00:50.303 [Pipeline] sh
00:00:50.586 + set -ex
00:00:50.586 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:50.586 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:50.586 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:50.586 ++ SPDK_TEST_NVMF=1
00:00:50.586 ++ SPDK_TEST_NVME_CLI=1
00:00:50.586 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:50.586 ++ SPDK_TEST_NVMF_NICS=e810
00:00:50.586 ++ SPDK_TEST_VFIOUSER=1
00:00:50.586 ++ SPDK_RUN_UBSAN=1
00:00:50.586 ++ NET_TYPE=phy
00:00:50.586 ++ RUN_NIGHTLY=0
00:00:50.586 + case $SPDK_TEST_NVMF_NICS in
00:00:50.586 + DRIVERS=ice
00:00:50.586 + [[ tcp == \r\d\m\a ]]
00:00:50.586 + [[ -n ice ]]
00:00:50.586 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:50.586 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:50.586 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:50.586 rmmod: ERROR: Module irdma is not currently loaded
00:00:50.586 rmmod: ERROR: Module i40iw is not currently loaded
00:00:50.586 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:50.586 + true
00:00:50.586 + for D in $DRIVERS
00:00:50.586 + sudo modprobe ice
00:00:50.586 + exit 0
00:00:50.597 [Pipeline] }
00:00:50.615 [Pipeline] // withEnv
00:00:50.621 [Pipeline] }
00:00:50.640 [Pipeline] // stage
00:00:50.650 [Pipeline] catchError
00:00:50.652 [Pipeline] {
00:00:50.667 [Pipeline] timeout
00:00:50.668 Timeout set to expire in 40 min
00:00:50.669 [Pipeline] {
00:00:50.684 [Pipeline] stage
00:00:50.687 [Pipeline] { (Tests)
00:00:50.704 [Pipeline] sh
00:00:50.988 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:50.988 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:50.988 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:50.988 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:50.988 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:50.988 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:50.988 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:50.988 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:50.988 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:50.988 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:50.988 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:50.988 + source /etc/os-release
00:00:50.988 ++ NAME='Fedora Linux'
00:00:50.988 ++ VERSION='38 (Cloud Edition)'
00:00:50.988 ++ ID=fedora
00:00:50.988 ++ VERSION_ID=38
00:00:50.988 ++ VERSION_CODENAME=
00:00:50.988 ++ PLATFORM_ID=platform:f38
00:00:50.988 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:50.988 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:50.988 ++ LOGO=fedora-logo-icon
00:00:50.988 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:50.988 ++ HOME_URL=https://fedoraproject.org/
00:00:50.988 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:50.988 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:50.988 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:50.988 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:50.988 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:50.988 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:50.988 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:50.988 ++ SUPPORT_END=2024-05-14
00:00:50.988 ++ VARIANT='Cloud Edition'
00:00:50.988 ++ VARIANT_ID=cloud
00:00:50.988 + uname -a
00:00:50.988 Linux spdk-gp-02 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:50.988 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:51.928 Hugepages
00:00:51.928 node hugesize free / total
00:00:51.928 node0 1048576kB 0 / 0
00:00:51.928 node0 2048kB 0 / 0
00:00:51.928 node1 1048576kB 0 / 0
00:00:51.928 node1 2048kB 0 / 0
00:00:51.928
00:00:51.928 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:51.928 I/OAT 0000:00:04.0 8086 3c20 0 ioatdma - -
00:00:51.928 I/OAT 0000:00:04.1 8086 3c21 0 ioatdma - -
00:00:51.928 I/OAT 0000:00:04.2 8086 3c22 0 ioatdma - -
00:00:51.928 I/OAT 0000:00:04.3 8086 3c23 0 ioatdma - -
00:00:51.928 I/OAT 0000:00:04.4 8086 3c24 0 ioatdma - -
00:00:51.928 I/OAT 0000:00:04.5 8086 3c25 0 ioatdma - -
00:00:51.928 I/OAT 0000:00:04.6 8086 3c26 0 ioatdma - -
00:00:51.928 I/OAT 0000:00:04.7 8086 3c27 0 ioatdma - -
00:00:51.928 I/OAT 0000:80:04.0 8086 3c20 1 ioatdma - -
00:00:51.928 I/OAT 0000:80:04.1 8086 3c21 1 ioatdma - -
00:00:51.928 I/OAT 0000:80:04.2 8086 3c22 1 ioatdma - -
00:00:51.928 I/OAT 0000:80:04.3 8086 3c23 1 ioatdma - -
00:00:51.928 I/OAT 0000:80:04.4 8086 3c24 1 ioatdma - -
00:00:51.928 I/OAT 0000:80:04.5 8086 3c25 1 ioatdma - -
00:00:51.928 I/OAT 0000:80:04.6 8086 3c26 1 ioatdma - -
00:00:51.928 I/OAT 0000:80:04.7 8086 3c27 1 ioatdma - -
00:00:51.928 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:51.928 + rm -f /tmp/spdk-ld-path
00:00:51.928 + source autorun-spdk.conf
00:00:51.928 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:51.928 ++ SPDK_TEST_NVMF=1
00:00:51.928 ++ SPDK_TEST_NVME_CLI=1
00:00:51.928 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:51.928 ++ SPDK_TEST_NVMF_NICS=e810
00:00:51.928 ++ SPDK_TEST_VFIOUSER=1
00:00:51.928 ++ SPDK_RUN_UBSAN=1
00:00:51.928 ++ NET_TYPE=phy
00:00:51.928 ++ RUN_NIGHTLY=0
00:00:51.928 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:51.928 + [[ -n '' ]]
00:00:51.928 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:51.928 + for M in /var/spdk/build-*-manifest.txt
00:00:51.928 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:51.928 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:51.928 + for M in /var/spdk/build-*-manifest.txt
00:00:51.928 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:51.928 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:51.928 ++ uname
00:00:51.928 + [[ Linux == \L\i\n\u\x ]]
00:00:51.928 + sudo dmesg -T
00:00:51.928 + sudo dmesg --clear
00:00:52.186 + dmesg_pid=2980194
00:00:52.186 + [[ Fedora Linux == FreeBSD ]]
00:00:52.186 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:52.186 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:52.186 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:52.186 + sudo dmesg -Tw
00:00:52.186 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:52.186 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:52.186 + [[ -x /usr/src/fio-static/fio ]]
00:00:52.186 + export FIO_BIN=/usr/src/fio-static/fio
00:00:52.186 + FIO_BIN=/usr/src/fio-static/fio
00:00:52.186 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:52.186 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:52.186 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:52.186 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:52.186 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:52.186 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:52.186 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:52.186 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:52.186 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:52.186 Test configuration:
00:00:52.186 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:52.186 SPDK_TEST_NVMF=1
00:00:52.186 SPDK_TEST_NVME_CLI=1
00:00:52.186 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:52.186 SPDK_TEST_NVMF_NICS=e810
00:00:52.186 SPDK_TEST_VFIOUSER=1
00:00:52.186 SPDK_RUN_UBSAN=1
00:00:52.186 NET_TYPE=phy
00:00:52.186 RUN_NIGHTLY=0
14:05:33 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:00:52.187 14:05:33 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:52.187 14:05:33 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:52.187 14:05:33 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:52.187 14:05:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:52.187 14:05:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:52.187 14:05:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:52.187 14:05:33 -- paths/export.sh@5 -- $ export PATH
00:00:52.187 14:05:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:52.187 14:05:33 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:00:52.187 14:05:33 -- common/autobuild_common.sh@435 -- $ date +%s
00:00:52.187 14:05:33 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714133133.XXXXXX
00:00:52.187 14:05:33 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714133133.KWc2fy
00:00:52.187 14:05:33 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:00:52.187 14:05:33 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:00:52.187 14:05:33 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:00:52.187 14:05:33 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:52.187 14:05:33 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:52.187 14:05:33 -- common/autobuild_common.sh@451 -- $ get_config_params
00:00:52.187 14:05:33 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:00:52.187 14:05:33 -- common/autotest_common.sh@10 -- $ set +x
00:00:52.187 14:05:33 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:00:52.187 14:05:33 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
00:00:52.187 14:05:33 -- pm/common@17 -- $ local monitor
00:00:52.187 14:05:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:52.187 14:05:33 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2980229
00:00:52.187 14:05:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:52.187 14:05:33 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2980231
00:00:52.187 14:05:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:52.187 14:05:33 -- pm/common@21 -- $ date +%s
00:00:52.187 14:05:33 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2980233
00:00:52.187 14:05:33 -- pm/common@21 -- $ date +%s
00:00:52.187 14:05:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:52.187 14:05:33 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2980237
00:00:52.187 14:05:33 -- pm/common@26 -- $ sleep 1
00:00:52.187 14:05:33 -- pm/common@21 -- $ date +%s
00:00:52.187 14:05:33 -- pm/common@21 -- $ date +%s
00:00:52.187 14:05:33 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714133133
00:00:52.187 14:05:33 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714133133
00:00:52.187 14:05:33 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714133133
00:00:52.187 14:05:33 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714133133
00:00:52.187 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714133133_collect-vmstat.pm.log
00:00:52.187 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714133133_collect-bmc-pm.bmc.pm.log
00:00:52.187 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714133133_collect-cpu-load.pm.log
00:00:52.187 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714133133_collect-cpu-temp.pm.log
00:00:53.124 14:05:34 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:00:53.124 14:05:34 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:53.124 14:05:34 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:53.124 14:05:34 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:53.124 14:05:34 -- spdk/autobuild.sh@16 -- $ date -u
00:00:53.124 Fri Apr 26 12:05:34 PM UTC 2024
00:00:53.124 14:05:34 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:53.124 v24.05-pre-441-g7f48663af
00:00:53.124 14:05:34 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:53.124 14:05:34 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:53.124 14:05:34 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:53.124 14:05:34 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:00:53.124 14:05:34 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:00:53.124 14:05:34 -- common/autotest_common.sh@10 -- $ set +x
00:00:53.383 ************************************
00:00:53.383 START TEST ubsan
00:00:53.383 ************************************
00:00:53.383 14:05:34 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan'
00:00:53.383 using ubsan
00:00:53.383
00:00:53.383 real 0m0.000s
00:00:53.383 user 0m0.000s
00:00:53.383 sys 0m0.000s
00:00:53.383 14:05:34 -- common/autotest_common.sh@1112 -- $ xtrace_disable
00:00:53.383 14:05:34 -- common/autotest_common.sh@10 -- $ set +x
00:00:53.383 ************************************
00:00:53.383 END TEST ubsan
00:00:53.383 ************************************
00:00:53.383 14:05:34 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:53.383 14:05:34 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:53.383 14:05:34 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:53.383 14:05:34 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:53.384 14:05:34 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:53.384 14:05:34 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:53.384 14:05:34 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:53.384 14:05:34 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:53.384 14:05:34 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:53.384 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:53.384 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:53.642 Using 'verbs' RDMA provider
00:01:04.184 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:16.418 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:16.418 Creating mk/config.mk...done.
00:01:16.418 Creating mk/cc.flags.mk...done.
00:01:16.418 Type 'make' to build.
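Everything this job does is parameterized by the key=value pairs in autorun-spdk.conf, which spdk/autorun.sh sources as a plain shell fragment. A minimal sketch of a config of the same shape, using only the keys that actually appear in this log (the comments are editorial glosses; the full set of supported keys lives in SPDK's autotest scripts):

  # autorun-spdk.conf -- sourced as shell by spdk/autorun.sh (sketch)
  SPDK_RUN_FUNCTIONAL_TEST=1    # run the functional test suites after the build
  SPDK_TEST_NVMF=1              # enable the NVMe-oF target tests
  SPDK_TEST_NVME_CLI=1          # exercise nvme-cli against the target
  SPDK_TEST_NVMF_TRANSPORT=tcp  # TCP transport; rdma would take the RDMA-driver path above
  SPDK_TEST_NVMF_NICS=e810      # Intel E810 NICs, hence DRIVERS=ice and 'modprobe ice' above
  SPDK_TEST_VFIOUSER=1          # vfio-user tests (note the VFIO_QEMU_BIN export above)
  SPDK_RUN_UBSAN=1              # adds --enable-ubsan to the configure invocation above
  NET_TYPE=phy                  # physical NICs, matching this job's 'phy' flavor
  RUN_NIGHTLY=0                 # skip nightly-only tests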
00:01:16.418 14:05:56 -- spdk/autobuild.sh@69 -- $ run_test make make -j32
00:01:16.418 14:05:56 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:16.418 14:05:56 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:16.418 14:05:56 -- common/autotest_common.sh@10 -- $ set +x
00:01:16.418 ************************************
00:01:16.418 START TEST make
00:01:16.418 ************************************
00:01:16.418 14:05:56 -- common/autotest_common.sh@1111 -- $ make -j32
00:01:16.683 make[1]: Nothing to be done for 'all'.
00:01:16.683 The Meson build system
00:01:16.683 Version: 1.3.1
00:01:16.683 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:16.683 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:16.683 Build type: native build
00:01:16.683 Project name: libvfio-user
00:01:16.683 Project version: 0.0.1
00:01:16.683 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:16.683 C linker for the host machine: cc ld.bfd 2.39-16
00:01:16.683 Host machine cpu family: x86_64
00:01:16.683 Host machine cpu: x86_64
00:01:16.683 Run-time dependency threads found: YES
00:01:16.683 Library dl found: YES
00:01:16.683 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:16.683 Run-time dependency json-c found: YES 0.17
00:01:16.683 Run-time dependency cmocka found: YES 1.1.7
00:01:16.683 Program pytest-3 found: NO
00:01:16.683 Program flake8 found: NO
00:01:16.683 Program misspell-fixer found: NO
00:01:16.683 Program restructuredtext-lint found: NO
00:01:16.683 Program valgrind found: YES (/usr/bin/valgrind)
00:01:16.683 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:16.683 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:16.683 Compiler for C supports arguments -Wwrite-strings: YES
00:01:16.683 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:16.683 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:16.683 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:16.683 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:16.683 Build targets in project: 8
00:01:16.683 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:16.683 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:16.683
00:01:16.683 libvfio-user 0.0.1
00:01:16.683
00:01:16.683 User defined options
00:01:16.683 buildtype : debug
00:01:16.683 default_library: shared
00:01:16.683 libdir : /usr/local/lib
00:01:16.683
00:01:16.683 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:17.253 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:17.522 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:17.522 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:17.522 [3/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:17.522 [4/37] Compiling C object samples/null.p/null.c.o
00:01:17.522 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:17.522 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:17.522 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:17.522 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:17.522 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:17.522 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:17.522 [11/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:17.522 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:17.522 [13/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:17.522 [14/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:17.785 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:17.785 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:17.785 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:17.785 [18/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:17.785 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:17.785 [20/37] Compiling C object samples/server.p/server.c.o
00:01:17.785 [21/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:17.785 [22/37] Compiling C object samples/client.p/client.c.o
00:01:17.785 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:17.785 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:17.785 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:17.785 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:17.785 [27/37] Linking target samples/client
00:01:17.785 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:17.785 [29/37] Linking target test/unit_tests
00:01:18.049 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:18.049 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:01:18.312 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:18.312 [33/37] Linking target samples/server
00:01:18.312 [34/37] Linking target samples/null
00:01:18.312 [35/37] Linking target samples/gpio-pci-idio-16
00:01:18.312 [36/37] Linking target samples/lspci
00:01:18.312 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:18.312 INFO: autodetecting backend as ninja
00:01:18.312 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
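Condensed, the libvfio-user build above is the standard Meson out-of-tree flow; a rough sketch, with $SPDK_ROOT standing in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk and options taken from the 'User defined options' block above:

  # configure the out-of-tree build directory
  meson setup $SPDK_ROOT/build/libvfio-user/build-debug $SPDK_ROOT/libvfio-user \
        --buildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
  # compile and link (the [1/37]..[37/37] steps above)
  ninja -C $SPDK_ROOT/build/libvfio-user/build-debug
  # staged install via DESTDIR, as the log shows next
  DESTDIR=$SPDK_ROOT/build/libvfio-user meson install --quiet -C $SPDK_ROOT/build/libvfio-user/build-debug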
00:01:18.312 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:18.898 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:18.898 ninja: no work to do.
00:01:25.495 The Meson build system
00:01:25.495 Version: 1.3.1
00:01:25.495 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:25.495 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:25.495 Build type: native build
00:01:25.495 Program cat found: YES (/usr/bin/cat)
00:01:25.495 Project name: DPDK
00:01:25.495 Project version: 23.11.0
00:01:25.495 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:25.495 C linker for the host machine: cc ld.bfd 2.39-16
00:01:25.495 Host machine cpu family: x86_64
00:01:25.495 Host machine cpu: x86_64
00:01:25.495 Message: ## Building in Developer Mode ##
00:01:25.495 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:25.495 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:25.495 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:25.495 Program python3 found: YES (/usr/bin/python3)
00:01:25.495 Program cat found: YES (/usr/bin/cat)
00:01:25.495 Compiler for C supports arguments -march=native: YES
00:01:25.495 Checking for size of "void *" : 8
00:01:25.495 Checking for size of "void *" : 8 (cached)
00:01:25.495 Library m found: YES
00:01:25.495 Library numa found: YES
00:01:25.495 Has header "numaif.h" : YES
00:01:25.495 Library fdt found: NO
00:01:25.495 Library execinfo found: NO
00:01:25.495 Has header "execinfo.h" : YES
00:01:25.495 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:25.495 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:25.495 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:25.495 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:25.495 Run-time dependency openssl found: YES 3.0.9
00:01:25.495 Run-time dependency libpcap found: YES 1.10.4
00:01:25.495 Has header "pcap.h" with dependency libpcap: YES
00:01:25.495 Compiler for C supports arguments -Wcast-qual: YES
00:01:25.495 Compiler for C supports arguments -Wdeprecated: YES
00:01:25.495 Compiler for C supports arguments -Wformat: YES
00:01:25.495 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:25.495 Compiler for C supports arguments -Wformat-security: NO
00:01:25.495 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:25.495 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:25.495 Compiler for C supports arguments -Wnested-externs: YES
00:01:25.495 Compiler for C supports arguments -Wold-style-definition: YES
00:01:25.495 Compiler for C supports arguments -Wpointer-arith: YES
00:01:25.495 Compiler for C supports arguments -Wsign-compare: YES
00:01:25.496 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:25.496 Compiler for C supports arguments -Wundef: YES
00:01:25.496 Compiler for C supports arguments -Wwrite-strings: YES
00:01:25.496 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:25.496 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:25.496 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:25.496 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:25.496 Program objdump found: YES (/usr/bin/objdump)
00:01:25.496 Compiler for C supports arguments -mavx512f: YES
00:01:25.496 Checking if "AVX512 checking" compiles: YES
00:01:25.496 Fetching value of define "__SSE4_2__" : 1
00:01:25.496 Fetching value of define "__AES__" : 1
00:01:25.496 Fetching value of define "__AVX__" : 1
00:01:25.496 Fetching value of define "__AVX2__" : (undefined)
00:01:25.496 Fetching value of define "__AVX512BW__" : (undefined)
00:01:25.496 Fetching value of define "__AVX512CD__" : (undefined)
00:01:25.496 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:25.496 Fetching value of define "__AVX512F__" : (undefined)
00:01:25.496 Fetching value of define "__AVX512VL__" : (undefined)
00:01:25.496 Fetching value of define "__PCLMUL__" : 1
00:01:25.496 Fetching value of define "__RDRND__" : (undefined)
00:01:25.496 Fetching value of define "__RDSEED__" : (undefined)
00:01:25.496 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:25.496 Fetching value of define "__znver1__" : (undefined)
00:01:25.496 Fetching value of define "__znver2__" : (undefined)
00:01:25.496 Fetching value of define "__znver3__" : (undefined)
00:01:25.496 Fetching value of define "__znver4__" : (undefined)
00:01:25.496 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:25.496 Message: lib/log: Defining dependency "log"
00:01:25.496 Message: lib/kvargs: Defining dependency "kvargs"
00:01:25.496 Message: lib/telemetry: Defining dependency "telemetry"
00:01:25.496 Checking for function "getentropy" : NO
00:01:25.496 Message: lib/eal: Defining dependency "eal"
00:01:25.496 Message: lib/ring: Defining dependency "ring"
00:01:25.496 Message: lib/rcu: Defining dependency "rcu"
00:01:25.496 Message: lib/mempool: Defining dependency "mempool"
00:01:25.496 Message: lib/mbuf: Defining dependency "mbuf"
00:01:25.496 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:25.496 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:25.496 Compiler for C supports arguments -mpclmul: YES
00:01:25.496 Compiler for C supports arguments -maes: YES
00:01:25.496 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:25.496 Compiler for C supports arguments -mavx512bw: YES
00:01:25.496 Compiler for C supports arguments -mavx512dq: YES
00:01:25.496 Compiler for C supports arguments -mavx512vl: YES
00:01:25.496 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:25.496 Compiler for C supports arguments -mavx2: YES
00:01:25.496 Compiler for C supports arguments -mavx: YES
00:01:25.496 Message: lib/net: Defining dependency "net"
00:01:25.496 Message: lib/meter: Defining dependency "meter"
00:01:25.496 Message: lib/ethdev: Defining dependency "ethdev"
00:01:25.496 Message: lib/pci: Defining dependency "pci"
00:01:25.496 Message: lib/cmdline: Defining dependency "cmdline"
00:01:25.496 Message: lib/hash: Defining dependency "hash"
00:01:25.496 Message: lib/timer: Defining dependency "timer"
00:01:25.496 Message: lib/compressdev: Defining dependency "compressdev"
00:01:25.496 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:25.496 Message: lib/dmadev: Defining dependency "dmadev"
00:01:25.496 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:25.496 Message: lib/power: Defining dependency "power"
00:01:25.496 Message: lib/reorder: Defining dependency "reorder"
00:01:25.496 Message: lib/security: Defining dependency "security"
00:01:25.496 Has header "linux/userfaultfd.h" : YES
00:01:25.496 Has header "linux/vduse.h" : YES
00:01:25.496 Message: lib/vhost: Defining dependency "vhost"
00:01:25.496 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:25.496 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:25.496 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:25.496 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:25.496 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:25.496 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:25.496 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:25.496 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:25.496 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:25.496 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:25.496 Program doxygen found: YES (/usr/bin/doxygen)
00:01:25.496 Configuring doxy-api-html.conf using configuration
00:01:25.496 Configuring doxy-api-man.conf using configuration
00:01:25.496 Program mandb found: YES (/usr/bin/mandb)
00:01:25.496 Program sphinx-build found: NO
00:01:25.496 Configuring rte_build_config.h using configuration
00:01:25.496 Message:
00:01:25.496 =================
00:01:25.496 Applications Enabled
00:01:25.496 =================
00:01:25.496
00:01:25.496 apps:
00:01:25.496
00:01:25.496
00:01:25.496 Message:
00:01:25.496 =================
00:01:25.496 Libraries Enabled
00:01:25.496 =================
00:01:25.496
00:01:25.496 libs:
00:01:25.496 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:25.496 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:25.496 cryptodev, dmadev, power, reorder, security, vhost,
00:01:25.496
00:01:25.496 Message:
00:01:25.496 ===============
00:01:25.496 Drivers Enabled
00:01:25.496 ===============
00:01:25.496
00:01:25.496 common:
00:01:25.496
00:01:25.496 bus:
00:01:25.496 pci, vdev,
00:01:25.496 mempool:
00:01:25.496 ring,
00:01:25.496 dma:
00:01:25.496
00:01:25.496 net:
00:01:25.496
00:01:25.496 crypto:
00:01:25.496
00:01:25.496 compress:
00:01:25.496
00:01:25.496 vdpa:
00:01:25.496
00:01:25.496
00:01:25.496 Message:
00:01:25.496 =================
00:01:25.496 Content Skipped
00:01:25.496 =================
00:01:25.496
00:01:25.496 apps:
00:01:25.496 dumpcap: explicitly disabled via build config
00:01:25.496 graph: explicitly disabled via build config
00:01:25.496 pdump: explicitly disabled via build config
00:01:25.496 proc-info: explicitly disabled via build config
00:01:25.496 test-acl: explicitly disabled via build config
00:01:25.496 test-bbdev: explicitly disabled via build config
00:01:25.496 test-cmdline: explicitly disabled via build config
00:01:25.496 test-compress-perf: explicitly disabled via build config
00:01:25.496 test-crypto-perf: explicitly disabled via build config
00:01:25.496 test-dma-perf: explicitly disabled via build config
00:01:25.496 test-eventdev: explicitly disabled via build config
00:01:25.496 test-fib: explicitly disabled via build config
00:01:25.496 test-flow-perf: explicitly disabled via build config
00:01:25.496 test-gpudev: explicitly disabled via build config
00:01:25.496 test-mldev: explicitly disabled via build config
00:01:25.496 test-pipeline: explicitly disabled via build config
00:01:25.496 test-pmd: explicitly disabled via build config
00:01:25.496 test-regex: explicitly disabled via build config
00:01:25.496 test-sad: explicitly disabled via build config
00:01:25.496 test-security-perf: explicitly disabled via build config
00:01:25.496
00:01:25.496 libs:
00:01:25.496 metrics: explicitly disabled via build config
00:01:25.496 acl: explicitly disabled via build config
00:01:25.496 bbdev: explicitly disabled via build config
00:01:25.496 bitratestats: explicitly disabled via build config
00:01:25.496 bpf: explicitly disabled via build config
00:01:25.496 cfgfile: explicitly disabled via build config
00:01:25.496 distributor: explicitly disabled via build config
00:01:25.496 efd: explicitly disabled via build config
00:01:25.496 eventdev: explicitly disabled via build config
00:01:25.496 dispatcher: explicitly disabled via build config
00:01:25.496 gpudev: explicitly disabled via build config
00:01:25.496 gro: explicitly disabled via build config
00:01:25.496 gso: explicitly disabled via build config
00:01:25.496 ip_frag: explicitly disabled via build config
00:01:25.496 jobstats: explicitly disabled via build config
00:01:25.496 latencystats: explicitly disabled via build config
00:01:25.496 lpm: explicitly disabled via build config
00:01:25.496 member: explicitly disabled via build config
00:01:25.496 pcapng: explicitly disabled via build config
00:01:25.496 rawdev: explicitly disabled via build config
00:01:25.496 regexdev: explicitly disabled via build config
00:01:25.496 mldev: explicitly disabled via build config
00:01:25.496 rib: explicitly disabled via build config
00:01:25.496 sched: explicitly disabled via build config
00:01:25.496 stack: explicitly disabled via build config
00:01:25.496 ipsec: explicitly disabled via build config
00:01:25.496 pdcp: explicitly disabled via build config
00:01:25.496 fib: explicitly disabled via build config
00:01:25.496 port: explicitly disabled via build config
00:01:25.496 pdump: explicitly disabled via build config
00:01:25.496 table: explicitly disabled via build config
00:01:25.496 pipeline: explicitly disabled via build config
00:01:25.496 graph: explicitly disabled via build config
00:01:25.496 node: explicitly disabled via build config
00:01:25.496
00:01:25.496 drivers:
00:01:25.496 common/cpt: not in enabled drivers build config
00:01:25.496 common/dpaax: not in enabled drivers build config
00:01:25.496 common/iavf: not in enabled drivers build config
00:01:25.496 common/idpf: not in enabled drivers build config
00:01:25.496 common/mvep: not in enabled drivers build config
00:01:25.496 common/octeontx: not in enabled drivers build config
00:01:25.496 bus/auxiliary: not in enabled drivers build config
00:01:25.496 bus/cdx: not in enabled drivers build config
00:01:25.496 bus/dpaa: not in enabled drivers build config
00:01:25.496 bus/fslmc: not in enabled drivers build config
00:01:25.496 bus/ifpga: not in enabled drivers build config
00:01:25.496 bus/platform: not in enabled drivers build config
00:01:25.496 bus/vmbus: not in enabled drivers build config
00:01:25.496 common/cnxk: not in enabled drivers build config
00:01:25.496 common/mlx5: not in enabled drivers build config
00:01:25.496 common/nfp: not in enabled drivers build config
00:01:25.497 common/qat: not in enabled drivers build config
00:01:25.497 common/sfc_efx: not in enabled drivers build config
00:01:25.497 mempool/bucket: not in enabled drivers build config
00:01:25.497 mempool/cnxk: not in enabled drivers build config
00:01:25.497 mempool/dpaa: not in enabled drivers build config
00:01:25.497 mempool/dpaa2: not in enabled drivers build config
00:01:25.497 mempool/octeontx: not in enabled drivers build config
00:01:25.497 mempool/stack: not in enabled drivers build config
00:01:25.497 dma/cnxk: not in enabled drivers build config
00:01:25.497 dma/dpaa: not in enabled drivers build config
00:01:25.497 dma/dpaa2: not in enabled drivers build config
00:01:25.497 dma/hisilicon: not in enabled drivers build config
00:01:25.497 dma/idxd: not in enabled drivers build config
00:01:25.497 dma/ioat: not in enabled drivers build config
00:01:25.497 dma/skeleton: not in enabled drivers build config
00:01:25.497 net/af_packet: not in enabled drivers build config
00:01:25.497 net/af_xdp: not in enabled drivers build config
00:01:25.497 net/ark: not in enabled drivers build config
00:01:25.497 net/atlantic: not in enabled drivers build config
00:01:25.497 net/avp: not in enabled drivers build config
00:01:25.497 net/axgbe: not in enabled drivers build config
00:01:25.497 net/bnx2x: not in enabled drivers build config
00:01:25.497 net/bnxt: not in enabled drivers build config
00:01:25.497 net/bonding: not in enabled drivers build config
00:01:25.497 net/cnxk: not in enabled drivers build config
00:01:25.497 net/cpfl: not in enabled drivers build config
00:01:25.497 net/cxgbe: not in enabled drivers build config
00:01:25.497 net/dpaa: not in enabled drivers build config
00:01:25.497 net/dpaa2: not in enabled drivers build config
00:01:25.497 net/e1000: not in enabled drivers build config
00:01:25.497 net/ena: not in enabled drivers build config
00:01:25.497 net/enetc: not in enabled drivers build config
00:01:25.497 net/enetfec: not in enabled drivers build config
00:01:25.497 net/enic: not in enabled drivers build config
00:01:25.497 net/failsafe: not in enabled drivers build config
00:01:25.497 net/fm10k: not in enabled drivers build config
00:01:25.497 net/gve: not in enabled drivers build config
00:01:25.497 net/hinic: not in enabled drivers build config
00:01:25.497 net/hns3: not in enabled drivers build config
00:01:25.497 net/i40e: not in enabled drivers build config
00:01:25.497 net/iavf: not in enabled drivers build config
00:01:25.497 net/ice: not in enabled drivers build config
00:01:25.497 net/idpf: not in enabled drivers build config
00:01:25.497 net/igc: not in enabled drivers build config
00:01:25.497 net/ionic: not in enabled drivers build config
00:01:25.497 net/ipn3ke: not in enabled drivers build config
00:01:25.497 net/ixgbe: not in enabled drivers build config
00:01:25.497 net/mana: not in enabled drivers build config
00:01:25.497 net/memif: not in enabled drivers build config
00:01:25.497 net/mlx4: not in enabled drivers build config
00:01:25.497 net/mlx5: not in enabled drivers build config
00:01:25.497 net/mvneta: not in enabled drivers build config
00:01:25.497 net/mvpp2: not in enabled drivers build config
00:01:25.497 net/netvsc: not in enabled drivers build config
00:01:25.497 net/nfb: not in enabled drivers build config
00:01:25.497 net/nfp: not in enabled drivers build config
00:01:25.497 net/ngbe: not in enabled drivers build config
00:01:25.497 net/null: not in enabled drivers build config
00:01:25.497 net/octeontx: not in enabled drivers build config
00:01:25.497 net/octeon_ep: not in enabled drivers build config
00:01:25.497 net/pcap: not in enabled drivers build config
00:01:25.497 net/pfe: not in enabled drivers build config
00:01:25.497 net/qede: not in enabled drivers build config
00:01:25.497 net/ring: not in enabled drivers build config
00:01:25.497 net/sfc: not in enabled drivers build config
00:01:25.497 net/softnic: not in enabled drivers build config
00:01:25.497 net/tap: not in enabled drivers build config
00:01:25.497 net/thunderx: not in enabled drivers build config
00:01:25.497 net/txgbe: not in enabled drivers build config
00:01:25.497 net/vdev_netvsc: not in enabled drivers build config
00:01:25.497 net/vhost: not in enabled drivers build config
00:01:25.497 net/virtio: not in enabled drivers build config
00:01:25.497 net/vmxnet3: not in enabled drivers build config
00:01:25.497 raw/*: missing internal dependency, "rawdev"
00:01:25.497 crypto/armv8: not in enabled drivers build config
00:01:25.497 crypto/bcmfs: not in enabled drivers build config
00:01:25.497 crypto/caam_jr: not in enabled drivers build config
00:01:25.497 crypto/ccp: not in enabled drivers build config
00:01:25.497 crypto/cnxk: not in enabled drivers build config
00:01:25.497 crypto/dpaa_sec: not in enabled drivers build config
00:01:25.497 crypto/dpaa2_sec: not in enabled drivers build config
00:01:25.497 crypto/ipsec_mb: not in enabled drivers build config
00:01:25.497 crypto/mlx5: not in enabled drivers build config
00:01:25.497 crypto/mvsam: not in enabled drivers build config
00:01:25.497 crypto/nitrox: not in enabled drivers build config
00:01:25.497 crypto/null: not in enabled drivers build config
00:01:25.497 crypto/octeontx: not in enabled drivers build config
00:01:25.497 crypto/openssl: not in enabled drivers build config
00:01:25.497 crypto/scheduler: not in enabled drivers build config
00:01:25.497 crypto/uadk: not in enabled drivers build config
00:01:25.497 crypto/virtio: not in enabled drivers build config
00:01:25.497 compress/isal: not in enabled drivers build config
00:01:25.497 compress/mlx5: not in enabled drivers build config
00:01:25.497 compress/octeontx: not in enabled drivers build config
00:01:25.497 compress/zlib: not in enabled drivers build config
00:01:25.497 regex/*: missing internal dependency, "regexdev"
00:01:25.497 ml/*: missing internal dependency, "mldev"
00:01:25.497 vdpa/ifc: not in enabled drivers build config
00:01:25.497 vdpa/mlx5: not in enabled drivers build config
00:01:25.497 vdpa/nfp: not in enabled drivers build config
00:01:25.497 vdpa/sfc: not in enabled drivers build config
00:01:25.497 event/*: missing internal dependency, "eventdev"
00:01:25.497 baseband/*: missing internal dependency, "bbdev"
00:01:25.497 gpu/*: missing internal dependency, "gpudev"
00:01:25.497
00:01:25.497
00:01:25.756 Build targets in project: 85
00:01:25.756
00:01:25.756 DPDK 23.11.0
00:01:25.756
00:01:25.756 User defined options
00:01:25.756 buildtype : debug
00:01:25.756 default_library : shared
00:01:25.756 libdir : lib
00:01:25.756 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:25.756 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:25.756 c_link_args :
00:01:25.756 cpu_instruction_set: native
00:01:25.756 disable_apps : test-acl,test-bbdev,test-crypto-perf,test-fib,test-pipeline,test-gpudev,test-flow-perf,pdump,dumpcap,test-sad,test-cmdline,test-eventdev,proc-info,test,test-dma-perf,test-pmd,test-mldev,test-compress-perf,test-security-perf,graph,test-regex
00:01:25.756 disable_libs : pipeline,member,eventdev,efd,bbdev,cfgfile,rib,sched,mldev,metrics,lpm,latencystats,pdump,pdcp,bpf,ipsec,fib,ip_frag,table,port,stack,gro,jobstats,regexdev,rawdev,pcapng,dispatcher,node,bitratestats,acl,gpudev,distributor,graph,gso
00:01:25.756 enable_docs : false
00:01:25.756 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:25.756 enable_kmods : false
00:01:25.756 tests : false
00:01:25.756
00:01:25.756 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:26.327 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:26.327 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:26.327 [2/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:26.594 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:26.594 [4/265] Linking static target lib/librte_kvargs.a
00:01:26.594 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:26.594 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:26.594 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:26.594 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:26.594 [9/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:26.594 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:26.594 [11/265] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:26.594 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:26.594 [13/265] Linking static target lib/librte_log.a
00:01:26.594 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:26.594 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:26.594 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:27.175 [17/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:27.175 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:27.175 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:27.437 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:27.437 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:27.437 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:27.437 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:27.437 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:27.437 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:27.437 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:27.437 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:27.437 [28/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:27.437 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:27.437 [30/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:27.437 [31/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:27.437 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:27.437 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:27.437 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:27.437 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:27.437 [36/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:27.701 [37/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:27.701 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:27.701 [39/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:27.701 [40/265] Linking static target lib/librte_telemetry.a
00:01:27.701 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:27.701 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:27.701 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:27.701 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:27.701 [45/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:27.701 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:27.701 [47/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:27.701 [48/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:27.701 [49/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:27.701 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:27.701 [51/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:27.701 [52/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:27.701 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:27.964 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:27.964 [55/265] Linking target lib/librte_log.so.24.0
00:01:27.964 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:27.964 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:27.964 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:28.223 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:28.223 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:28.223 [61/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:01:28.223 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:28.501 [63/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:28.501 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:28.501 [65/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:28.501 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:28.501 [67/265] Linking target lib/librte_kvargs.so.24.0
00:01:28.501 [68/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:28.501 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:28.501 [70/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:28.501 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:28.762 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:28.762 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:28.762 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:28.762 [75/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:28.762 [76/265] Linking static target lib/librte_ring.a
00:01:28.762 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:28.762 [78/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:28.762 [79/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:01:28.762 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:28.762 [81/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:28.762 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:28.763 [83/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:28.763 [84/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:29.024 [85/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:29.024 [86/265] Linking target lib/librte_telemetry.so.24.0
00:01:29.024 [87/265] Linking static target lib/librte_eal.a
00:01:29.024 [88/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:29.024 [89/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:29.024 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:29.024 [91/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:29.288 [92/265] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:29.288 [93/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:29.288 [94/265] Linking static target lib/librte_rcu.a
00:01:29.288 [95/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:29.288 [96/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:01:29.288 [97/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:29.288 [98/265] Linking static target lib/librte_pci.a
00:01:29.288 [99/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:29.288 [100/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:29.288 [101/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:29.288 [102/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:29.288 [103/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:29.288 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:29.288 [105/265] Linking static target lib/librte_meter.a
00:01:29.288 [106/265] Linking static target lib/librte_mempool.a
00:01:29.288 [107/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:29.548 [108/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:29.548 [109/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:29.548 [110/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:29.548 [111/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:29.548 [112/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:29.548 [113/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:29.548 [114/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:29.548 [115/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:29.548 [116/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:29.548 [117/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:29.548 [118/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:29.809 [119/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:29.809 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:29.809 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:30.083 [122/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:30.083 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:30.083 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:30.083 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:30.083 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:30.083 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:30.083 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:30.083 [129/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:30.083 [130/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:30.083 [131/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:30.083 [132/265] Linking static target lib/librte_net.a
00:01:30.342 [133/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:30.342 [134/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:30.342 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:30.342 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:30.342 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:30.342 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:30.342 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:30.603 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:30.603 [141/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:30.603 [142/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:30.603 [143/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:30.603 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:30.603 [145/265] Linking static target lib/librte_cmdline.a
00:01:30.865 [146/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:30.865 [147/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:30.865 [148/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:30.865 [149/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:30.865 [150/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:30.865 [151/265] Linking static target lib/librte_mbuf.a
00:01:30.865 [152/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:30.865 [153/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:30.865 [154/265] Linking static target lib/librte_timer.a
00:01:31.128 [155/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:31.385 [156/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:31.385 [157/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:31.385 [158/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:31.385 [159/265] Linking
static target lib/librte_compressdev.a 00:01:31.385 [160/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:31.385 [161/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:31.385 [162/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:31.385 [163/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:31.385 [164/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:31.385 [165/265] Linking static target lib/librte_dmadev.a 00:01:31.385 [166/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:31.645 [167/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:31.645 [168/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:31.645 [169/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:31.645 [170/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:31.645 [171/265] Linking static target lib/librte_hash.a 00:01:31.645 [172/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:31.645 [173/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:31.645 [174/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:31.645 [175/265] Linking static target lib/librte_power.a 00:01:31.645 [176/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.904 [177/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:31.904 [178/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:31.904 [179/265] Linking static target lib/librte_reorder.a 00:01:31.904 [180/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.904 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:31.904 [182/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.166 [183/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:32.166 [184/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.166 [185/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.166 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:32.166 [187/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:32.166 [188/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:32.166 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:32.166 [190/265] Linking static target lib/librte_security.a 00:01:32.166 [191/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.424 [192/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:32.424 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:32.424 [194/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.424 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:32.424 [196/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:32.424 [197/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:32.424 [198/265] Linking static target drivers/libtmp_rte_bus_vdev.a 
00:01:32.424 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:32.424 [200/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.683 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:32.683 [202/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:32.683 [203/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:32.683 [204/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:32.683 [205/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:32.683 [206/265] Linking static target drivers/librte_bus_vdev.a 00:01:32.683 [207/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:32.683 [208/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:32.683 [209/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:32.683 [210/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.683 [211/265] Linking static target lib/librte_ethdev.a 00:01:32.683 [212/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:32.683 [213/265] Linking static target lib/librte_cryptodev.a 00:01:32.683 [214/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:32.683 [215/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:32.683 [216/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:32.942 [217/265] Linking static target drivers/librte_bus_pci.a 00:01:32.942 [218/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.942 [219/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:32.942 [220/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:32.942 [221/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:32.942 [222/265] Linking static target drivers/librte_mempool_ring.a 00:01:33.201 [223/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.767 [224/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.671 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:36.239 [226/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.239 [227/265] Linking target lib/librte_eal.so.24.0 00:01:36.497 [228/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.497 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:36.497 [230/265] Linking target lib/librte_ring.so.24.0 00:01:36.497 [231/265] Linking target lib/librte_meter.so.24.0 00:01:36.497 [232/265] Linking target lib/librte_pci.so.24.0 00:01:36.497 [233/265] Linking target lib/librte_timer.so.24.0 00:01:36.497 [234/265] Linking target lib/librte_dmadev.so.24.0 00:01:36.497 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:36.497 [236/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:36.497 [237/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 
00:01:36.497 [238/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:36.497 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:36.497 [240/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:36.497 [241/265] Linking target lib/librte_rcu.so.24.0 00:01:36.497 [242/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:36.497 [243/265] Linking target lib/librte_mempool.so.24.0 00:01:36.755 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:36.755 [245/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:36.755 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:36.755 [247/265] Linking target lib/librte_mbuf.so.24.0 00:01:37.013 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:37.013 [249/265] Linking target lib/librte_reorder.so.24.0 00:01:37.013 [250/265] Linking target lib/librte_compressdev.so.24.0 00:01:37.013 [251/265] Linking target lib/librte_net.so.24.0 00:01:37.013 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:01:37.013 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:37.013 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:37.271 [255/265] Linking target lib/librte_security.so.24.0 00:01:37.271 [256/265] Linking target lib/librte_hash.so.24.0 00:01:37.271 [257/265] Linking target lib/librte_cmdline.so.24.0 00:01:37.271 [258/265] Linking target lib/librte_ethdev.so.24.0 00:01:37.271 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:37.271 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:37.529 [261/265] Linking target lib/librte_power.so.24.0 00:01:41.717 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:41.717 [263/265] Linking static target lib/librte_vhost.a 00:01:43.092 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.092 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:43.092 INFO: autodetecting backend as ninja 00:01:43.092 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 32 00:01:44.026 CC lib/ut/ut.o 00:01:44.026 CC lib/log/log.o 00:01:44.026 CC lib/ut_mock/mock.o 00:01:44.026 CC lib/log/log_flags.o 00:01:44.026 CC lib/log/log_deprecated.o 00:01:44.283 LIB libspdk_ut_mock.a 00:01:44.283 SO libspdk_ut_mock.so.6.0 00:01:44.284 LIB libspdk_log.a 00:01:44.284 LIB libspdk_ut.a 00:01:44.284 SO libspdk_ut.so.2.0 00:01:44.284 SO libspdk_log.so.7.0 00:01:44.284 SYMLINK libspdk_ut_mock.so 00:01:44.284 SYMLINK libspdk_ut.so 00:01:44.284 SYMLINK libspdk_log.so 00:01:44.542 CC lib/dma/dma.o 00:01:44.542 CC lib/ioat/ioat.o 00:01:44.542 CXX lib/trace_parser/trace.o 00:01:44.542 CC lib/util/base64.o 00:01:44.542 CC lib/util/bit_array.o 00:01:44.542 CC lib/util/cpuset.o 00:01:44.542 CC lib/util/crc16.o 00:01:44.542 CC lib/util/crc32.o 00:01:44.542 CC lib/util/crc32c.o 00:01:44.542 CC lib/util/crc32_ieee.o 00:01:44.542 CC lib/util/crc64.o 00:01:44.542 CC lib/util/dif.o 00:01:44.542 CC lib/util/fd.o 00:01:44.542 CC lib/util/file.o 00:01:44.542 CC lib/util/hexlify.o 00:01:44.542 CC lib/util/iov.o 00:01:44.542 CC lib/util/pipe.o 00:01:44.542 CC lib/util/math.o 
00:01:44.542 CC lib/util/strerror_tls.o 00:01:44.542 CC lib/util/string.o 00:01:44.542 CC lib/util/uuid.o 00:01:44.542 CC lib/util/fd_group.o 00:01:44.542 CC lib/util/xor.o 00:01:44.542 CC lib/util/zipf.o 00:01:44.542 CC lib/vfio_user/host/vfio_user_pci.o 00:01:44.542 CC lib/vfio_user/host/vfio_user.o 00:01:44.801 LIB libspdk_ioat.a 00:01:44.801 SO libspdk_ioat.so.7.0 00:01:44.801 LIB libspdk_dma.a 00:01:44.801 SO libspdk_dma.so.4.0 00:01:44.801 SYMLINK libspdk_ioat.so 00:01:44.801 SYMLINK libspdk_dma.so 00:01:45.060 LIB libspdk_vfio_user.a 00:01:45.060 SO libspdk_vfio_user.so.5.0 00:01:45.060 SYMLINK libspdk_vfio_user.so 00:01:45.318 LIB libspdk_util.a 00:01:45.318 SO libspdk_util.so.9.0 00:01:45.318 SYMLINK libspdk_util.so 00:01:45.582 LIB libspdk_trace_parser.a 00:01:45.582 SO libspdk_trace_parser.so.5.0 00:01:45.582 SYMLINK libspdk_trace_parser.so 00:01:45.582 CC lib/idxd/idxd.o 00:01:45.582 CC lib/idxd/idxd_user.o 00:01:45.582 CC lib/env_dpdk/env.o 00:01:45.582 CC lib/rdma/common.o 00:01:45.582 CC lib/rdma/rdma_verbs.o 00:01:45.582 CC lib/env_dpdk/memory.o 00:01:45.582 CC lib/env_dpdk/pci.o 00:01:45.582 CC lib/env_dpdk/init.o 00:01:45.582 CC lib/env_dpdk/threads.o 00:01:45.582 CC lib/vmd/vmd.o 00:01:45.582 CC lib/conf/conf.o 00:01:45.582 CC lib/vmd/led.o 00:01:45.582 CC lib/env_dpdk/pci_ioat.o 00:01:45.582 CC lib/env_dpdk/pci_virtio.o 00:01:45.582 CC lib/env_dpdk/pci_vmd.o 00:01:45.582 CC lib/env_dpdk/pci_idxd.o 00:01:45.582 CC lib/env_dpdk/pci_event.o 00:01:45.582 CC lib/json/json_parse.o 00:01:45.582 CC lib/json/json_util.o 00:01:45.582 CC lib/env_dpdk/sigbus_handler.o 00:01:45.582 CC lib/env_dpdk/pci_dpdk.o 00:01:45.582 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:45.582 CC lib/json/json_write.o 00:01:45.582 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:45.872 LIB libspdk_conf.a 00:01:45.872 SO libspdk_conf.so.6.0 00:01:45.872 LIB libspdk_rdma.a 00:01:46.166 LIB libspdk_json.a 00:01:46.166 SYMLINK libspdk_conf.so 00:01:46.166 SO libspdk_rdma.so.6.0 00:01:46.166 SO libspdk_json.so.6.0 00:01:46.166 SYMLINK libspdk_rdma.so 00:01:46.166 SYMLINK libspdk_json.so 00:01:46.166 LIB libspdk_idxd.a 00:01:46.166 CC lib/jsonrpc/jsonrpc_server.o 00:01:46.166 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:46.166 CC lib/jsonrpc/jsonrpc_client.o 00:01:46.166 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:46.166 SO libspdk_idxd.so.12.0 00:01:46.426 SYMLINK libspdk_idxd.so 00:01:46.426 LIB libspdk_vmd.a 00:01:46.426 SO libspdk_vmd.so.6.0 00:01:46.426 SYMLINK libspdk_vmd.so 00:01:46.426 LIB libspdk_jsonrpc.a 00:01:46.684 SO libspdk_jsonrpc.so.6.0 00:01:46.684 SYMLINK libspdk_jsonrpc.so 00:01:46.684 CC lib/rpc/rpc.o 00:01:46.944 LIB libspdk_rpc.a 00:01:47.202 SO libspdk_rpc.so.6.0 00:01:47.202 SYMLINK libspdk_rpc.so 00:01:47.202 CC lib/keyring/keyring.o 00:01:47.203 CC lib/keyring/keyring_rpc.o 00:01:47.203 CC lib/trace/trace.o 00:01:47.203 CC lib/trace/trace_flags.o 00:01:47.203 CC lib/trace/trace_rpc.o 00:01:47.203 CC lib/notify/notify.o 00:01:47.203 CC lib/notify/notify_rpc.o 00:01:47.461 LIB libspdk_notify.a 00:01:47.461 SO libspdk_notify.so.6.0 00:01:47.461 LIB libspdk_keyring.a 00:01:47.461 SYMLINK libspdk_notify.so 00:01:47.461 LIB libspdk_trace.a 00:01:47.461 SO libspdk_keyring.so.1.0 00:01:47.719 SO libspdk_trace.so.10.0 00:01:47.719 SYMLINK libspdk_keyring.so 00:01:47.719 SYMLINK libspdk_trace.so 00:01:47.719 LIB libspdk_env_dpdk.a 00:01:47.719 SO libspdk_env_dpdk.so.14.0 00:01:47.719 CC lib/sock/sock.o 00:01:47.719 CC lib/sock/sock_rpc.o 00:01:47.719 CC lib/thread/thread.o 00:01:47.719 CC lib/thread/iobuf.o 
00:01:47.977 SYMLINK libspdk_env_dpdk.so 00:01:48.235 LIB libspdk_sock.a 00:01:48.235 SO libspdk_sock.so.9.0 00:01:48.235 SYMLINK libspdk_sock.so 00:01:48.494 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:48.494 CC lib/nvme/nvme_ctrlr.o 00:01:48.494 CC lib/nvme/nvme_fabric.o 00:01:48.494 CC lib/nvme/nvme_ns_cmd.o 00:01:48.494 CC lib/nvme/nvme_ns.o 00:01:48.494 CC lib/nvme/nvme_pcie_common.o 00:01:48.494 CC lib/nvme/nvme_pcie.o 00:01:48.494 CC lib/nvme/nvme_qpair.o 00:01:48.494 CC lib/nvme/nvme.o 00:01:48.494 CC lib/nvme/nvme_quirks.o 00:01:48.494 CC lib/nvme/nvme_transport.o 00:01:48.494 CC lib/nvme/nvme_discovery.o 00:01:48.494 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:48.494 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:48.494 CC lib/nvme/nvme_tcp.o 00:01:48.494 CC lib/nvme/nvme_opal.o 00:01:48.494 CC lib/nvme/nvme_io_msg.o 00:01:48.494 CC lib/nvme/nvme_poll_group.o 00:01:48.494 CC lib/nvme/nvme_zns.o 00:01:48.494 CC lib/nvme/nvme_stubs.o 00:01:48.494 CC lib/nvme/nvme_auth.o 00:01:48.494 CC lib/nvme/nvme_cuse.o 00:01:48.494 CC lib/nvme/nvme_vfio_user.o 00:01:48.494 CC lib/nvme/nvme_rdma.o 00:01:49.870 LIB libspdk_thread.a 00:01:49.870 SO libspdk_thread.so.10.0 00:01:49.870 SYMLINK libspdk_thread.so 00:01:49.870 CC lib/vfu_tgt/tgt_endpoint.o 00:01:49.870 CC lib/blob/blobstore.o 00:01:49.870 CC lib/init/json_config.o 00:01:49.870 CC lib/vfu_tgt/tgt_rpc.o 00:01:49.870 CC lib/blob/request.o 00:01:49.870 CC lib/init/subsystem.o 00:01:49.870 CC lib/blob/zeroes.o 00:01:49.870 CC lib/init/subsystem_rpc.o 00:01:49.870 CC lib/blob/blob_bs_dev.o 00:01:49.870 CC lib/init/rpc.o 00:01:49.870 CC lib/accel/accel.o 00:01:49.870 CC lib/virtio/virtio.o 00:01:49.870 CC lib/accel/accel_sw.o 00:01:49.870 CC lib/accel/accel_rpc.o 00:01:49.870 CC lib/virtio/virtio_vhost_user.o 00:01:49.870 CC lib/virtio/virtio_vfio_user.o 00:01:49.870 CC lib/virtio/virtio_pci.o 00:01:50.438 LIB libspdk_init.a 00:01:50.438 SO libspdk_init.so.5.0 00:01:50.438 LIB libspdk_vfu_tgt.a 00:01:50.438 LIB libspdk_virtio.a 00:01:50.438 SYMLINK libspdk_init.so 00:01:50.438 SO libspdk_vfu_tgt.so.3.0 00:01:50.438 SO libspdk_virtio.so.7.0 00:01:50.438 SYMLINK libspdk_vfu_tgt.so 00:01:50.438 SYMLINK libspdk_virtio.so 00:01:50.438 CC lib/event/app.o 00:01:50.438 CC lib/event/reactor.o 00:01:50.438 CC lib/event/log_rpc.o 00:01:50.696 CC lib/event/scheduler_static.o 00:01:50.696 CC lib/event/app_rpc.o 00:01:50.953 LIB libspdk_event.a 00:01:50.953 SO libspdk_event.so.13.0 00:01:50.953 LIB libspdk_accel.a 00:01:50.953 SYMLINK libspdk_event.so 00:01:51.212 SO libspdk_accel.so.15.0 00:01:51.212 SYMLINK libspdk_accel.so 00:01:51.212 CC lib/bdev/bdev.o 00:01:51.212 CC lib/bdev/bdev_rpc.o 00:01:51.212 CC lib/bdev/bdev_zone.o 00:01:51.212 CC lib/bdev/part.o 00:01:51.212 CC lib/bdev/scsi_nvme.o 00:01:51.470 LIB libspdk_nvme.a 00:01:51.470 SO libspdk_nvme.so.13.0 00:01:51.728 SYMLINK libspdk_nvme.so 00:01:53.100 LIB libspdk_blob.a 00:01:53.100 SO libspdk_blob.so.11.0 00:01:53.100 SYMLINK libspdk_blob.so 00:01:53.100 CC lib/blobfs/blobfs.o 00:01:53.100 CC lib/lvol/lvol.o 00:01:53.100 CC lib/blobfs/tree.o 00:01:54.038 LIB libspdk_bdev.a 00:01:54.038 SO libspdk_bdev.so.15.0 00:01:54.038 SYMLINK libspdk_bdev.so 00:01:54.038 CC lib/ublk/ublk.o 00:01:54.038 CC lib/nvmf/ctrlr.o 00:01:54.038 CC lib/nbd/nbd.o 00:01:54.038 CC lib/scsi/dev.o 00:01:54.038 CC lib/nvmf/ctrlr_discovery.o 00:01:54.038 CC lib/nbd/nbd_rpc.o 00:01:54.038 CC lib/scsi/lun.o 00:01:54.038 CC lib/ublk/ublk_rpc.o 00:01:54.038 CC lib/ftl/ftl_core.o 00:01:54.038 CC lib/nvmf/ctrlr_bdev.o 00:01:54.038 LIB 
libspdk_blobfs.a 00:01:54.038 CC lib/scsi/port.o 00:01:54.038 CC lib/ftl/ftl_init.o 00:01:54.038 CC lib/scsi/scsi.o 00:01:54.038 CC lib/nvmf/subsystem.o 00:01:54.038 CC lib/nvmf/nvmf.o 00:01:54.038 CC lib/scsi/scsi_bdev.o 00:01:54.038 CC lib/ftl/ftl_layout.o 00:01:54.038 CC lib/nvmf/nvmf_rpc.o 00:01:54.038 CC lib/ftl/ftl_debug.o 00:01:54.038 CC lib/scsi/scsi_pr.o 00:01:54.038 CC lib/nvmf/transport.o 00:01:54.038 CC lib/scsi/scsi_rpc.o 00:01:54.038 CC lib/ftl/ftl_io.o 00:01:54.038 CC lib/scsi/task.o 00:01:54.038 CC lib/ftl/ftl_sb.o 00:01:54.038 CC lib/nvmf/tcp.o 00:01:54.038 CC lib/ftl/ftl_l2p.o 00:01:54.038 CC lib/nvmf/vfio_user.o 00:01:54.038 CC lib/ftl/ftl_l2p_flat.o 00:01:54.038 CC lib/ftl/ftl_nv_cache.o 00:01:54.038 SO libspdk_blobfs.so.10.0 00:01:54.299 LIB libspdk_lvol.a 00:01:54.299 SYMLINK libspdk_blobfs.so 00:01:54.299 CC lib/nvmf/rdma.o 00:01:54.299 SO libspdk_lvol.so.10.0 00:01:54.299 SYMLINK libspdk_lvol.so 00:01:54.299 CC lib/ftl/ftl_band.o 00:01:54.299 CC lib/ftl/ftl_band_ops.o 00:01:54.299 CC lib/ftl/ftl_writer.o 00:01:54.562 CC lib/ftl/ftl_rq.o 00:01:54.562 CC lib/ftl/ftl_reloc.o 00:01:54.562 CC lib/ftl/ftl_l2p_cache.o 00:01:54.562 CC lib/ftl/ftl_p2l.o 00:01:54.562 CC lib/ftl/mngt/ftl_mngt.o 00:01:54.562 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:54.562 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:54.562 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:54.562 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:54.562 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:54.562 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:54.562 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:54.829 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:54.829 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:54.829 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:54.829 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:54.829 LIB libspdk_nbd.a 00:01:54.829 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:54.829 SO libspdk_nbd.so.7.0 00:01:54.829 CC lib/ftl/utils/ftl_conf.o 00:01:55.091 CC lib/ftl/utils/ftl_md.o 00:01:55.091 CC lib/ftl/utils/ftl_mempool.o 00:01:55.091 SYMLINK libspdk_nbd.so 00:01:55.091 CC lib/ftl/utils/ftl_bitmap.o 00:01:55.091 LIB libspdk_scsi.a 00:01:55.091 CC lib/ftl/utils/ftl_property.o 00:01:55.091 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:55.091 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:55.091 SO libspdk_scsi.so.9.0 00:01:55.091 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:55.091 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:55.091 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:55.091 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:55.091 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:55.091 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:55.091 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:55.091 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:55.091 CC lib/ftl/base/ftl_base_dev.o 00:01:55.350 LIB libspdk_ublk.a 00:01:55.350 SYMLINK libspdk_scsi.so 00:01:55.350 CC lib/ftl/base/ftl_base_bdev.o 00:01:55.350 CC lib/ftl/ftl_trace.o 00:01:55.350 SO libspdk_ublk.so.3.0 00:01:55.350 SYMLINK libspdk_ublk.so 00:01:55.608 CC lib/iscsi/conn.o 00:01:55.608 CC lib/iscsi/init_grp.o 00:01:55.608 CC lib/iscsi/iscsi.o 00:01:55.608 CC lib/iscsi/md5.o 00:01:55.608 CC lib/iscsi/param.o 00:01:55.608 CC lib/iscsi/portal_grp.o 00:01:55.608 CC lib/iscsi/tgt_node.o 00:01:55.608 CC lib/iscsi/iscsi_subsystem.o 00:01:55.608 CC lib/iscsi/iscsi_rpc.o 00:01:55.608 CC lib/iscsi/task.o 00:01:55.608 CC lib/vhost/vhost.o 00:01:55.608 CC lib/vhost/vhost_rpc.o 00:01:55.608 CC lib/vhost/vhost_scsi.o 00:01:55.608 CC lib/vhost/vhost_blk.o 00:01:55.608 CC lib/vhost/rte_vhost_user.o 00:01:55.865 LIB libspdk_ftl.a 00:01:55.865 SO libspdk_ftl.so.9.0 00:01:56.434 
SYMLINK libspdk_ftl.so 00:01:56.692 LIB libspdk_vhost.a 00:01:56.692 SO libspdk_vhost.so.8.0 00:01:56.951 SYMLINK libspdk_vhost.so 00:01:56.951 LIB libspdk_nvmf.a 00:01:56.951 LIB libspdk_iscsi.a 00:01:56.951 SO libspdk_nvmf.so.18.0 00:01:57.210 SO libspdk_iscsi.so.8.0 00:01:57.210 SYMLINK libspdk_iscsi.so 00:01:57.210 SYMLINK libspdk_nvmf.so 00:01:57.469 CC module/env_dpdk/env_dpdk_rpc.o 00:01:57.469 CC module/vfu_device/vfu_virtio.o 00:01:57.469 CC module/vfu_device/vfu_virtio_blk.o 00:01:57.469 CC module/vfu_device/vfu_virtio_scsi.o 00:01:57.469 CC module/vfu_device/vfu_virtio_rpc.o 00:01:57.727 CC module/keyring/file/keyring.o 00:01:57.727 CC module/accel/error/accel_error.o 00:01:57.727 CC module/keyring/file/keyring_rpc.o 00:01:57.727 CC module/accel/error/accel_error_rpc.o 00:01:57.727 CC module/accel/dsa/accel_dsa.o 00:01:57.727 CC module/sock/posix/posix.o 00:01:57.727 CC module/accel/iaa/accel_iaa.o 00:01:57.727 CC module/accel/dsa/accel_dsa_rpc.o 00:01:57.727 CC module/accel/ioat/accel_ioat.o 00:01:57.727 CC module/accel/ioat/accel_ioat_rpc.o 00:01:57.727 CC module/accel/iaa/accel_iaa_rpc.o 00:01:57.727 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:57.727 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:57.727 CC module/blob/bdev/blob_bdev.o 00:01:57.727 CC module/scheduler/gscheduler/gscheduler.o 00:01:57.727 LIB libspdk_env_dpdk_rpc.a 00:01:57.727 SO libspdk_env_dpdk_rpc.so.6.0 00:01:57.727 LIB libspdk_scheduler_dpdk_governor.a 00:01:57.727 SYMLINK libspdk_env_dpdk_rpc.so 00:01:57.727 LIB libspdk_accel_ioat.a 00:01:57.727 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:57.727 LIB libspdk_accel_error.a 00:01:57.985 LIB libspdk_accel_iaa.a 00:01:57.985 SO libspdk_accel_ioat.so.6.0 00:01:57.985 LIB libspdk_scheduler_gscheduler.a 00:01:57.985 SO libspdk_accel_error.so.2.0 00:01:57.985 LIB libspdk_keyring_file.a 00:01:57.985 LIB libspdk_scheduler_dynamic.a 00:01:57.985 LIB libspdk_accel_dsa.a 00:01:57.985 SO libspdk_accel_iaa.so.3.0 00:01:57.985 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:57.985 SO libspdk_keyring_file.so.1.0 00:01:57.985 SO libspdk_scheduler_dynamic.so.4.0 00:01:57.985 SO libspdk_scheduler_gscheduler.so.4.0 00:01:57.985 SO libspdk_accel_dsa.so.5.0 00:01:57.985 SYMLINK libspdk_accel_ioat.so 00:01:57.985 SYMLINK libspdk_accel_error.so 00:01:57.985 SYMLINK libspdk_scheduler_gscheduler.so 00:01:57.985 SYMLINK libspdk_accel_iaa.so 00:01:57.985 LIB libspdk_blob_bdev.a 00:01:57.985 SYMLINK libspdk_keyring_file.so 00:01:57.985 SYMLINK libspdk_scheduler_dynamic.so 00:01:57.985 SYMLINK libspdk_accel_dsa.so 00:01:57.985 SO libspdk_blob_bdev.so.11.0 00:01:57.985 SYMLINK libspdk_blob_bdev.so 00:01:58.251 LIB libspdk_vfu_device.a 00:01:58.251 SO libspdk_vfu_device.so.3.0 00:01:58.251 CC module/bdev/delay/vbdev_delay.o 00:01:58.251 CC module/blobfs/bdev/blobfs_bdev.o 00:01:58.251 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:58.251 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:58.251 CC module/bdev/passthru/vbdev_passthru.o 00:01:58.251 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:58.251 CC module/bdev/malloc/bdev_malloc.o 00:01:58.251 CC module/bdev/null/bdev_null.o 00:01:58.251 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:58.251 CC module/bdev/null/bdev_null_rpc.o 00:01:58.251 CC module/bdev/error/vbdev_error.o 00:01:58.251 CC module/bdev/error/vbdev_error_rpc.o 00:01:58.251 CC module/bdev/nvme/bdev_nvme.o 00:01:58.251 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:58.251 CC module/bdev/nvme/nvme_rpc.o 00:01:58.251 CC 
module/bdev/zone_block/vbdev_zone_block.o 00:01:58.251 CC module/bdev/ftl/bdev_ftl.o 00:01:58.251 CC module/bdev/nvme/bdev_mdns_client.o 00:01:58.251 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:58.251 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:58.251 CC module/bdev/nvme/vbdev_opal.o 00:01:58.251 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:58.251 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:58.251 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:58.251 CC module/bdev/aio/bdev_aio.o 00:01:58.251 CC module/bdev/gpt/gpt.o 00:01:58.251 CC module/bdev/iscsi/bdev_iscsi.o 00:01:58.251 CC module/bdev/lvol/vbdev_lvol.o 00:01:58.251 CC module/bdev/split/vbdev_split.o 00:01:58.251 CC module/bdev/raid/bdev_raid.o 00:01:58.251 SYMLINK libspdk_vfu_device.so 00:01:58.251 CC module/bdev/aio/bdev_aio_rpc.o 00:01:58.818 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:58.818 CC module/bdev/split/vbdev_split_rpc.o 00:01:58.818 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:58.818 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:58.818 CC module/bdev/raid/bdev_raid_rpc.o 00:01:58.818 CC module/bdev/raid/bdev_raid_sb.o 00:01:58.818 LIB libspdk_blobfs_bdev.a 00:01:58.818 CC module/bdev/gpt/vbdev_gpt.o 00:01:58.818 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:58.818 CC module/bdev/raid/raid0.o 00:01:58.818 SO libspdk_blobfs_bdev.so.6.0 00:01:58.818 CC module/bdev/raid/raid1.o 00:01:58.818 CC module/bdev/raid/concat.o 00:01:58.818 LIB libspdk_bdev_null.a 00:01:58.818 SYMLINK libspdk_blobfs_bdev.so 00:01:58.818 SO libspdk_bdev_null.so.6.0 00:01:58.818 LIB libspdk_sock_posix.a 00:01:58.818 LIB libspdk_bdev_ftl.a 00:01:58.818 SO libspdk_sock_posix.so.6.0 00:01:58.818 LIB libspdk_bdev_error.a 00:01:58.818 LIB libspdk_bdev_zone_block.a 00:01:59.077 SO libspdk_bdev_ftl.so.6.0 00:01:59.077 SO libspdk_bdev_zone_block.so.6.0 00:01:59.077 LIB libspdk_bdev_passthru.a 00:01:59.077 SO libspdk_bdev_error.so.6.0 00:01:59.077 SYMLINK libspdk_bdev_null.so 00:01:59.077 LIB libspdk_bdev_iscsi.a 00:01:59.077 SO libspdk_bdev_passthru.so.6.0 00:01:59.077 LIB libspdk_bdev_delay.a 00:01:59.077 LIB libspdk_bdev_split.a 00:01:59.077 SO libspdk_bdev_iscsi.so.6.0 00:01:59.077 SYMLINK libspdk_bdev_zone_block.so 00:01:59.077 SYMLINK libspdk_bdev_ftl.so 00:01:59.077 LIB libspdk_bdev_malloc.a 00:01:59.077 LIB libspdk_bdev_aio.a 00:01:59.077 SYMLINK libspdk_sock_posix.so 00:01:59.077 SYMLINK libspdk_bdev_error.so 00:01:59.077 SO libspdk_bdev_delay.so.6.0 00:01:59.077 SO libspdk_bdev_split.so.6.0 00:01:59.077 SO libspdk_bdev_malloc.so.6.0 00:01:59.077 SO libspdk_bdev_aio.so.6.0 00:01:59.077 SYMLINK libspdk_bdev_passthru.so 00:01:59.077 SYMLINK libspdk_bdev_delay.so 00:01:59.077 SYMLINK libspdk_bdev_iscsi.so 00:01:59.077 SYMLINK libspdk_bdev_split.so 00:01:59.077 LIB libspdk_bdev_gpt.a 00:01:59.077 SYMLINK libspdk_bdev_malloc.so 00:01:59.077 SYMLINK libspdk_bdev_aio.so 00:01:59.077 LIB libspdk_bdev_lvol.a 00:01:59.077 SO libspdk_bdev_gpt.so.6.0 00:01:59.077 SO libspdk_bdev_lvol.so.6.0 00:01:59.335 SYMLINK libspdk_bdev_gpt.so 00:01:59.335 SYMLINK libspdk_bdev_lvol.so 00:01:59.335 LIB libspdk_bdev_virtio.a 00:01:59.335 SO libspdk_bdev_virtio.so.6.0 00:01:59.335 SYMLINK libspdk_bdev_virtio.so 00:01:59.594 LIB libspdk_bdev_raid.a 00:01:59.852 SO libspdk_bdev_raid.so.6.0 00:01:59.852 SYMLINK libspdk_bdev_raid.so 00:02:01.228 LIB libspdk_bdev_nvme.a 00:02:01.228 SO libspdk_bdev_nvme.so.7.0 00:02:01.489 SYMLINK libspdk_bdev_nvme.so 00:02:01.749 CC module/event/subsystems/vmd/vmd.o 00:02:01.749 CC module/event/subsystems/keyring/keyring.o 00:02:01.749 CC 
module/event/subsystems/iobuf/iobuf.o 00:02:01.749 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:01.749 CC module/event/subsystems/sock/sock.o 00:02:01.749 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:01.749 CC module/event/subsystems/scheduler/scheduler.o 00:02:01.749 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:01.749 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:02.009 LIB libspdk_event_sock.a 00:02:02.009 LIB libspdk_event_keyring.a 00:02:02.009 LIB libspdk_event_vhost_blk.a 00:02:02.009 LIB libspdk_event_scheduler.a 00:02:02.009 LIB libspdk_event_vmd.a 00:02:02.009 LIB libspdk_event_vfu_tgt.a 00:02:02.009 SO libspdk_event_sock.so.5.0 00:02:02.009 SO libspdk_event_keyring.so.1.0 00:02:02.009 LIB libspdk_event_iobuf.a 00:02:02.009 SO libspdk_event_vhost_blk.so.3.0 00:02:02.009 SO libspdk_event_vfu_tgt.so.3.0 00:02:02.009 SO libspdk_event_scheduler.so.4.0 00:02:02.009 SO libspdk_event_vmd.so.6.0 00:02:02.009 SO libspdk_event_iobuf.so.3.0 00:02:02.009 SYMLINK libspdk_event_keyring.so 00:02:02.009 SYMLINK libspdk_event_sock.so 00:02:02.009 SYMLINK libspdk_event_vhost_blk.so 00:02:02.009 SYMLINK libspdk_event_scheduler.so 00:02:02.009 SYMLINK libspdk_event_vfu_tgt.so 00:02:02.009 SYMLINK libspdk_event_vmd.so 00:02:02.009 SYMLINK libspdk_event_iobuf.so 00:02:02.267 CC module/event/subsystems/accel/accel.o 00:02:02.527 LIB libspdk_event_accel.a 00:02:02.527 SO libspdk_event_accel.so.6.0 00:02:02.527 SYMLINK libspdk_event_accel.so 00:02:02.787 CC module/event/subsystems/bdev/bdev.o 00:02:03.046 LIB libspdk_event_bdev.a 00:02:03.046 SO libspdk_event_bdev.so.6.0 00:02:03.046 SYMLINK libspdk_event_bdev.so 00:02:03.305 CC module/event/subsystems/nbd/nbd.o 00:02:03.305 CC module/event/subsystems/ublk/ublk.o 00:02:03.305 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:03.305 CC module/event/subsystems/scsi/scsi.o 00:02:03.305 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:03.305 LIB libspdk_event_ublk.a 00:02:03.305 LIB libspdk_event_nbd.a 00:02:03.305 LIB libspdk_event_scsi.a 00:02:03.305 SO libspdk_event_ublk.so.3.0 00:02:03.305 SO libspdk_event_nbd.so.6.0 00:02:03.305 SO libspdk_event_scsi.so.6.0 00:02:03.305 SYMLINK libspdk_event_nbd.so 00:02:03.305 SYMLINK libspdk_event_ublk.so 00:02:03.563 SYMLINK libspdk_event_scsi.so 00:02:03.563 LIB libspdk_event_nvmf.a 00:02:03.563 SO libspdk_event_nvmf.so.6.0 00:02:03.563 SYMLINK libspdk_event_nvmf.so 00:02:03.563 CC module/event/subsystems/iscsi/iscsi.o 00:02:03.563 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:03.823 LIB libspdk_event_vhost_scsi.a 00:02:03.823 LIB libspdk_event_iscsi.a 00:02:03.823 SO libspdk_event_vhost_scsi.so.3.0 00:02:03.823 SO libspdk_event_iscsi.so.6.0 00:02:03.823 SYMLINK libspdk_event_vhost_scsi.so 00:02:03.823 SYMLINK libspdk_event_iscsi.so 00:02:04.086 SO libspdk.so.6.0 00:02:04.086 SYMLINK libspdk.so 00:02:04.351 CXX app/trace/trace.o 00:02:04.351 CC app/trace_record/trace_record.o 00:02:04.351 TEST_HEADER include/spdk/accel.h 00:02:04.351 TEST_HEADER include/spdk/accel_module.h 00:02:04.351 CC app/spdk_nvme_discover/discovery_aer.o 00:02:04.351 CC app/spdk_top/spdk_top.o 00:02:04.351 TEST_HEADER include/spdk/assert.h 00:02:04.351 CC app/spdk_lspci/spdk_lspci.o 00:02:04.351 CC app/spdk_nvme_identify/identify.o 00:02:04.351 TEST_HEADER include/spdk/barrier.h 00:02:04.351 CC app/spdk_nvme_perf/perf.o 00:02:04.351 TEST_HEADER include/spdk/base64.h 00:02:04.351 TEST_HEADER include/spdk/bdev.h 00:02:04.351 TEST_HEADER include/spdk/bdev_module.h 00:02:04.351 TEST_HEADER include/spdk/bdev_zone.h 
00:02:04.351 TEST_HEADER include/spdk/bit_array.h 00:02:04.351 TEST_HEADER include/spdk/bit_pool.h 00:02:04.351 TEST_HEADER include/spdk/blob_bdev.h 00:02:04.351 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:04.351 TEST_HEADER include/spdk/blobfs.h 00:02:04.351 TEST_HEADER include/spdk/blob.h 00:02:04.351 TEST_HEADER include/spdk/conf.h 00:02:04.351 TEST_HEADER include/spdk/config.h 00:02:04.351 TEST_HEADER include/spdk/cpuset.h 00:02:04.351 TEST_HEADER include/spdk/crc16.h 00:02:04.351 TEST_HEADER include/spdk/crc32.h 00:02:04.351 TEST_HEADER include/spdk/crc64.h 00:02:04.351 TEST_HEADER include/spdk/dif.h 00:02:04.351 TEST_HEADER include/spdk/dma.h 00:02:04.351 TEST_HEADER include/spdk/endian.h 00:02:04.351 TEST_HEADER include/spdk/env_dpdk.h 00:02:04.351 TEST_HEADER include/spdk/env.h 00:02:04.351 CC app/spdk_dd/spdk_dd.o 00:02:04.351 TEST_HEADER include/spdk/event.h 00:02:04.351 TEST_HEADER include/spdk/fd_group.h 00:02:04.351 CC app/vhost/vhost.o 00:02:04.351 CC app/nvmf_tgt/nvmf_main.o 00:02:04.351 TEST_HEADER include/spdk/fd.h 00:02:04.351 TEST_HEADER include/spdk/file.h 00:02:04.351 CC app/iscsi_tgt/iscsi_tgt.o 00:02:04.351 TEST_HEADER include/spdk/ftl.h 00:02:04.352 TEST_HEADER include/spdk/gpt_spec.h 00:02:04.352 TEST_HEADER include/spdk/hexlify.h 00:02:04.352 TEST_HEADER include/spdk/histogram_data.h 00:02:04.352 TEST_HEADER include/spdk/idxd.h 00:02:04.352 TEST_HEADER include/spdk/idxd_spec.h 00:02:04.352 TEST_HEADER include/spdk/init.h 00:02:04.352 TEST_HEADER include/spdk/ioat.h 00:02:04.352 CC examples/ioat/perf/perf.o 00:02:04.352 TEST_HEADER include/spdk/ioat_spec.h 00:02:04.352 TEST_HEADER include/spdk/iscsi_spec.h 00:02:04.352 TEST_HEADER include/spdk/json.h 00:02:04.352 TEST_HEADER include/spdk/jsonrpc.h 00:02:04.352 CC examples/nvme/hello_world/hello_world.o 00:02:04.352 CC examples/util/zipf/zipf.o 00:02:04.352 TEST_HEADER include/spdk/keyring.h 00:02:04.352 CC examples/vmd/lsvmd/lsvmd.o 00:02:04.352 TEST_HEADER include/spdk/keyring_module.h 00:02:04.352 CC test/event/event_perf/event_perf.o 00:02:04.352 CC test/nvme/aer/aer.o 00:02:04.352 TEST_HEADER include/spdk/likely.h 00:02:04.352 CC app/spdk_tgt/spdk_tgt.o 00:02:04.352 TEST_HEADER include/spdk/log.h 00:02:04.352 CC examples/accel/perf/accel_perf.o 00:02:04.352 TEST_HEADER include/spdk/lvol.h 00:02:04.352 TEST_HEADER include/spdk/memory.h 00:02:04.352 CC examples/sock/hello_world/hello_sock.o 00:02:04.352 TEST_HEADER include/spdk/mmio.h 00:02:04.352 TEST_HEADER include/spdk/nbd.h 00:02:04.352 TEST_HEADER include/spdk/notify.h 00:02:04.352 TEST_HEADER include/spdk/nvme.h 00:02:04.352 TEST_HEADER include/spdk/nvme_intel.h 00:02:04.352 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:04.352 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:04.352 TEST_HEADER include/spdk/nvme_spec.h 00:02:04.614 TEST_HEADER include/spdk/nvme_zns.h 00:02:04.614 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:04.614 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:04.614 TEST_HEADER include/spdk/nvmf.h 00:02:04.614 TEST_HEADER include/spdk/nvmf_spec.h 00:02:04.614 CC test/accel/dif/dif.o 00:02:04.614 TEST_HEADER include/spdk/nvmf_transport.h 00:02:04.614 TEST_HEADER include/spdk/opal.h 00:02:04.614 CC examples/blob/hello_world/hello_blob.o 00:02:04.614 CC test/dma/test_dma/test_dma.o 00:02:04.614 TEST_HEADER include/spdk/opal_spec.h 00:02:04.614 CC test/app/bdev_svc/bdev_svc.o 00:02:04.614 TEST_HEADER include/spdk/pci_ids.h 00:02:04.614 CC test/bdev/bdevio/bdevio.o 00:02:04.614 CC test/blobfs/mkfs/mkfs.o 00:02:04.614 TEST_HEADER include/spdk/pipe.h 
00:02:04.614 TEST_HEADER include/spdk/queue.h 00:02:04.614 CC examples/bdev/hello_world/hello_bdev.o 00:02:04.614 TEST_HEADER include/spdk/reduce.h 00:02:04.614 TEST_HEADER include/spdk/rpc.h 00:02:04.614 TEST_HEADER include/spdk/scheduler.h 00:02:04.614 TEST_HEADER include/spdk/scsi.h 00:02:04.614 CC examples/thread/thread/thread_ex.o 00:02:04.614 TEST_HEADER include/spdk/scsi_spec.h 00:02:04.614 CC examples/nvmf/nvmf/nvmf.o 00:02:04.614 TEST_HEADER include/spdk/sock.h 00:02:04.614 TEST_HEADER include/spdk/stdinc.h 00:02:04.614 TEST_HEADER include/spdk/string.h 00:02:04.614 TEST_HEADER include/spdk/thread.h 00:02:04.614 CC test/env/mem_callbacks/mem_callbacks.o 00:02:04.614 TEST_HEADER include/spdk/trace.h 00:02:04.614 TEST_HEADER include/spdk/trace_parser.h 00:02:04.614 TEST_HEADER include/spdk/tree.h 00:02:04.614 TEST_HEADER include/spdk/ublk.h 00:02:04.614 CC test/lvol/esnap/esnap.o 00:02:04.614 TEST_HEADER include/spdk/util.h 00:02:04.614 TEST_HEADER include/spdk/uuid.h 00:02:04.614 TEST_HEADER include/spdk/version.h 00:02:04.614 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:04.614 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:04.614 TEST_HEADER include/spdk/vhost.h 00:02:04.614 TEST_HEADER include/spdk/vmd.h 00:02:04.614 LINK spdk_lspci 00:02:04.614 TEST_HEADER include/spdk/xor.h 00:02:04.614 TEST_HEADER include/spdk/zipf.h 00:02:04.614 CXX test/cpp_headers/accel.o 00:02:04.614 LINK spdk_nvme_discover 00:02:04.614 LINK lsvmd 00:02:04.876 LINK spdk_trace_record 00:02:04.876 LINK event_perf 00:02:04.876 LINK nvmf_tgt 00:02:04.876 LINK zipf 00:02:04.876 LINK ioat_perf 00:02:04.876 LINK vhost 00:02:04.876 LINK iscsi_tgt 00:02:04.876 LINK bdev_svc 00:02:04.876 LINK spdk_tgt 00:02:04.876 LINK mkfs 00:02:04.876 LINK hello_world 00:02:04.876 LINK hello_sock 00:02:05.136 LINK aer 00:02:05.136 LINK hello_bdev 00:02:05.136 LINK spdk_dd 00:02:05.136 LINK hello_blob 00:02:05.136 CXX test/cpp_headers/accel_module.o 00:02:05.136 LINK spdk_trace 00:02:05.136 LINK thread 00:02:05.136 CC examples/blob/cli/blobcli.o 00:02:05.136 LINK nvmf 00:02:05.136 CC test/event/reactor/reactor.o 00:02:05.136 CC test/rpc_client/rpc_client_test.o 00:02:05.136 LINK dif 00:02:05.136 CC test/event/reactor_perf/reactor_perf.o 00:02:05.136 LINK test_dma 00:02:05.136 CC examples/vmd/led/led.o 00:02:05.136 CC examples/nvme/reconnect/reconnect.o 00:02:05.136 CC examples/ioat/verify/verify.o 00:02:05.399 LINK accel_perf 00:02:05.399 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:05.399 LINK bdevio 00:02:05.399 CC test/event/app_repeat/app_repeat.o 00:02:05.399 CXX test/cpp_headers/assert.o 00:02:05.399 CC test/nvme/reset/reset.o 00:02:05.399 CC test/event/scheduler/scheduler.o 00:02:05.399 CC examples/nvme/arbitration/arbitration.o 00:02:05.399 CC examples/bdev/bdevperf/bdevperf.o 00:02:05.399 LINK reactor_perf 00:02:05.399 LINK reactor 00:02:05.399 LINK led 00:02:05.399 CC test/app/histogram_perf/histogram_perf.o 00:02:05.399 LINK rpc_client_test 00:02:05.661 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:05.661 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:05.661 CC test/app/jsoncat/jsoncat.o 00:02:05.661 CC examples/idxd/perf/perf.o 00:02:05.661 CC test/app/stub/stub.o 00:02:05.661 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:05.661 CC app/fio/nvme/fio_plugin.o 00:02:05.661 LINK app_repeat 00:02:05.661 CC examples/nvme/hotplug/hotplug.o 00:02:05.661 CC test/nvme/sgl/sgl.o 00:02:05.661 LINK verify 00:02:05.661 CXX test/cpp_headers/barrier.o 00:02:05.661 CC test/nvme/e2edp/nvme_dp.o 00:02:05.661 CC 
test/env/vtophys/vtophys.o 00:02:05.661 LINK spdk_nvme_identify 00:02:05.661 LINK spdk_nvme_perf 00:02:05.925 CXX test/cpp_headers/base64.o 00:02:05.925 LINK histogram_perf 00:02:05.925 CXX test/cpp_headers/bdev.o 00:02:05.925 LINK spdk_top 00:02:05.925 LINK scheduler 00:02:05.925 LINK mem_callbacks 00:02:05.925 LINK jsoncat 00:02:05.925 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:05.925 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:05.925 LINK reset 00:02:05.925 LINK stub 00:02:05.925 LINK reconnect 00:02:05.925 LINK interrupt_tgt 00:02:06.187 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:06.187 LINK vtophys 00:02:06.187 CC test/thread/poller_perf/poller_perf.o 00:02:06.187 LINK blobcli 00:02:06.187 CC test/nvme/overhead/overhead.o 00:02:06.187 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:06.187 LINK arbitration 00:02:06.187 CXX test/cpp_headers/bdev_module.o 00:02:06.187 LINK hotplug 00:02:06.187 CC examples/nvme/abort/abort.o 00:02:06.187 LINK idxd_perf 00:02:06.187 CC test/env/memory/memory_ut.o 00:02:06.187 CC test/env/pci/pci_ut.o 00:02:06.187 LINK sgl 00:02:06.187 LINK nvme_fuzz 00:02:06.187 LINK nvme_dp 00:02:06.187 LINK env_dpdk_post_init 00:02:06.187 CXX test/cpp_headers/bdev_zone.o 00:02:06.187 CXX test/cpp_headers/bit_array.o 00:02:06.187 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:06.447 CC test/nvme/err_injection/err_injection.o 00:02:06.447 LINK nvme_manage 00:02:06.447 CC test/nvme/startup/startup.o 00:02:06.447 CC app/fio/bdev/fio_plugin.o 00:02:06.447 LINK poller_perf 00:02:06.447 CC test/nvme/reserve/reserve.o 00:02:06.447 CC test/nvme/simple_copy/simple_copy.o 00:02:06.447 CC test/nvme/connect_stress/connect_stress.o 00:02:06.447 CC test/nvme/boot_partition/boot_partition.o 00:02:06.447 CXX test/cpp_headers/bit_pool.o 00:02:06.710 CXX test/cpp_headers/blob_bdev.o 00:02:06.710 LINK cmb_copy 00:02:06.710 CC test/nvme/compliance/nvme_compliance.o 00:02:06.710 CXX test/cpp_headers/blobfs_bdev.o 00:02:06.710 CXX test/cpp_headers/blobfs.o 00:02:06.710 CXX test/cpp_headers/blob.o 00:02:06.710 CXX test/cpp_headers/conf.o 00:02:06.710 LINK spdk_nvme 00:02:06.710 CXX test/cpp_headers/config.o 00:02:06.710 CC test/nvme/fused_ordering/fused_ordering.o 00:02:06.710 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:06.710 LINK pmr_persistence 00:02:06.710 CXX test/cpp_headers/cpuset.o 00:02:06.710 CXX test/cpp_headers/crc16.o 00:02:06.710 CC test/nvme/fdp/fdp.o 00:02:06.710 LINK overhead 00:02:06.710 CXX test/cpp_headers/crc32.o 00:02:06.710 LINK startup 00:02:06.710 CC test/nvme/cuse/cuse.o 00:02:06.710 LINK err_injection 00:02:06.710 CXX test/cpp_headers/crc64.o 00:02:06.710 LINK reserve 00:02:06.973 LINK vhost_fuzz 00:02:06.973 LINK connect_stress 00:02:06.973 CXX test/cpp_headers/dif.o 00:02:06.973 LINK boot_partition 00:02:06.973 CXX test/cpp_headers/dma.o 00:02:06.973 LINK simple_copy 00:02:06.973 CXX test/cpp_headers/endian.o 00:02:06.973 LINK abort 00:02:06.973 LINK pci_ut 00:02:06.973 CXX test/cpp_headers/env_dpdk.o 00:02:06.973 CXX test/cpp_headers/env.o 00:02:06.973 CXX test/cpp_headers/event.o 00:02:06.973 CXX test/cpp_headers/fd_group.o 00:02:06.973 CXX test/cpp_headers/fd.o 00:02:06.973 CXX test/cpp_headers/file.o 00:02:06.973 LINK doorbell_aers 00:02:06.973 CXX test/cpp_headers/ftl.o 00:02:06.973 CXX test/cpp_headers/gpt_spec.o 00:02:06.973 CXX test/cpp_headers/hexlify.o 00:02:06.973 CXX test/cpp_headers/histogram_data.o 00:02:06.973 LINK fused_ordering 00:02:06.973 LINK bdevperf 00:02:06.973 CXX test/cpp_headers/idxd.o 00:02:07.237 CXX 
test/cpp_headers/idxd_spec.o 00:02:07.238 CXX test/cpp_headers/init.o 00:02:07.238 CXX test/cpp_headers/ioat.o 00:02:07.238 CXX test/cpp_headers/ioat_spec.o 00:02:07.238 CXX test/cpp_headers/iscsi_spec.o 00:02:07.238 CXX test/cpp_headers/json.o 00:02:07.238 CXX test/cpp_headers/jsonrpc.o 00:02:07.238 LINK nvme_compliance 00:02:07.238 CXX test/cpp_headers/keyring.o 00:02:07.238 LINK fdp 00:02:07.238 CXX test/cpp_headers/keyring_module.o 00:02:07.238 CXX test/cpp_headers/likely.o 00:02:07.238 CXX test/cpp_headers/log.o 00:02:07.238 CXX test/cpp_headers/lvol.o 00:02:07.500 CXX test/cpp_headers/memory.o 00:02:07.500 CXX test/cpp_headers/mmio.o 00:02:07.500 CXX test/cpp_headers/nbd.o 00:02:07.500 CXX test/cpp_headers/notify.o 00:02:07.500 CXX test/cpp_headers/nvme.o 00:02:07.500 LINK spdk_bdev 00:02:07.500 CXX test/cpp_headers/nvme_intel.o 00:02:07.500 CXX test/cpp_headers/nvme_ocssd.o 00:02:07.500 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:07.500 CXX test/cpp_headers/nvme_spec.o 00:02:07.500 CXX test/cpp_headers/nvme_zns.o 00:02:07.500 CXX test/cpp_headers/nvmf_cmd.o 00:02:07.500 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:07.500 CXX test/cpp_headers/nvmf.o 00:02:07.500 CXX test/cpp_headers/nvmf_spec.o 00:02:07.500 CXX test/cpp_headers/nvmf_transport.o 00:02:07.500 CXX test/cpp_headers/opal.o 00:02:07.500 CXX test/cpp_headers/opal_spec.o 00:02:07.500 CXX test/cpp_headers/pci_ids.o 00:02:07.500 CXX test/cpp_headers/pipe.o 00:02:07.500 CXX test/cpp_headers/queue.o 00:02:07.500 CXX test/cpp_headers/reduce.o 00:02:07.500 CXX test/cpp_headers/rpc.o 00:02:07.500 CXX test/cpp_headers/scheduler.o 00:02:07.500 CXX test/cpp_headers/scsi.o 00:02:07.500 CXX test/cpp_headers/scsi_spec.o 00:02:07.762 CXX test/cpp_headers/sock.o 00:02:07.762 CXX test/cpp_headers/stdinc.o 00:02:07.762 CXX test/cpp_headers/string.o 00:02:07.762 CXX test/cpp_headers/thread.o 00:02:07.762 CXX test/cpp_headers/trace.o 00:02:07.762 CXX test/cpp_headers/trace_parser.o 00:02:07.762 CXX test/cpp_headers/tree.o 00:02:07.762 CXX test/cpp_headers/ublk.o 00:02:07.762 CXX test/cpp_headers/util.o 00:02:07.762 CXX test/cpp_headers/uuid.o 00:02:07.762 CXX test/cpp_headers/version.o 00:02:07.762 CXX test/cpp_headers/vfio_user_pci.o 00:02:07.762 CXX test/cpp_headers/vfio_user_spec.o 00:02:08.020 CXX test/cpp_headers/vhost.o 00:02:08.020 CXX test/cpp_headers/vmd.o 00:02:08.020 CXX test/cpp_headers/xor.o 00:02:08.020 CXX test/cpp_headers/zipf.o 00:02:08.020 LINK memory_ut 00:02:08.278 LINK iscsi_fuzz 00:02:08.278 LINK cuse 00:02:11.559 LINK esnap 00:02:11.559 00:02:11.559 real 0m56.528s 00:02:11.559 user 10m59.530s 00:02:11.559 sys 2m21.758s 00:02:11.559 14:06:52 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:02:11.559 14:06:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.559 ************************************ 00:02:11.559 END TEST make 00:02:11.559 ************************************ 00:02:11.559 14:06:52 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:11.559 14:06:52 -- pm/common@30 -- $ signal_monitor_resources TERM 00:02:11.559 14:06:52 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:02:11.559 14:06:52 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.559 14:06:52 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:11.559 14:06:52 -- pm/common@45 -- $ pid=2980244 00:02:11.559 14:06:52 -- pm/common@52 -- $ sudo kill -TERM 2980244 00:02:11.559 14:06:52 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:02:11.559 14:06:52 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:11.559 14:06:52 -- pm/common@45 -- $ pid=2980246 00:02:11.559 14:06:52 -- pm/common@52 -- $ sudo kill -TERM 2980246 00:02:11.559 14:06:52 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.559 14:06:52 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:11.559 14:06:52 -- pm/common@45 -- $ pid=2980245 00:02:11.559 14:06:52 -- pm/common@52 -- $ sudo kill -TERM 2980245 00:02:11.559 14:06:52 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.559 14:06:52 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:11.559 14:06:52 -- pm/common@45 -- $ pid=2980247 00:02:11.559 14:06:52 -- pm/common@52 -- $ sudo kill -TERM 2980247 00:02:11.559 14:06:52 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:11.559 14:06:52 -- nvmf/common.sh@7 -- # uname -s 00:02:11.559 14:06:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:11.559 14:06:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:11.559 14:06:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:11.559 14:06:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:11.559 14:06:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:11.559 14:06:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:11.559 14:06:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:11.559 14:06:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:11.559 14:06:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:11.559 14:06:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:11.559 14:06:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:02:11.559 14:06:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:02:11.559 14:06:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:11.559 14:06:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:11.559 14:06:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:11.559 14:06:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:11.559 14:06:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:11.559 14:06:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:11.559 14:06:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:11.559 14:06:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:11.559 14:06:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.559 14:06:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.559 14:06:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.559 14:06:52 -- paths/export.sh@5 -- # export PATH 00:02:11.559 14:06:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.559 14:06:52 -- nvmf/common.sh@47 -- # : 0 00:02:11.559 14:06:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:11.559 14:06:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:11.559 14:06:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:11.559 14:06:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:11.559 14:06:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:11.559 14:06:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:11.559 14:06:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:11.559 14:06:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:11.559 14:06:52 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:11.559 14:06:52 -- spdk/autotest.sh@32 -- # uname -s 00:02:11.559 14:06:52 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:11.559 14:06:52 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:11.559 14:06:52 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:11.559 14:06:52 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:11.559 14:06:52 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:11.559 14:06:52 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:11.559 14:06:52 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:11.559 14:06:52 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:11.559 14:06:52 -- spdk/autotest.sh@48 -- # udevadm_pid=3034561 00:02:11.559 14:06:52 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:11.559 14:06:52 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:11.559 14:06:52 -- pm/common@17 -- # local monitor 00:02:11.559 14:06:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.559 14:06:52 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3034562 00:02:11.559 14:06:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.559 14:06:52 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3034565 00:02:11.559 14:06:52 -- pm/common@21 -- # date +%s 00:02:11.559 14:06:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.559 14:06:52 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3034568 00:02:11.559 14:06:52 -- pm/common@21 -- # date +%s 00:02:11.559 14:06:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.559 14:06:52 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=3034572 00:02:11.559 14:06:52 -- pm/common@21 -- # date +%s 00:02:11.559 14:06:52 -- pm/common@26 -- # sleep 1 00:02:11.559 14:06:52 -- pm/common@21 -- # date +%s 00:02:11.559 14:06:52 -- pm/common@21 -- # sudo -E 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714133212 00:02:11.559 14:06:52 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714133212 00:02:11.559 14:06:52 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714133212 00:02:11.559 14:06:52 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714133212 00:02:11.559 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714133212_collect-vmstat.pm.log 00:02:11.559 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714133212_collect-bmc-pm.bmc.pm.log 00:02:11.559 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714133212_collect-cpu-load.pm.log 00:02:11.559 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714133212_collect-cpu-temp.pm.log 00:02:12.508 14:06:53 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:12.508 14:06:53 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:12.508 14:06:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:12.508 14:06:53 -- common/autotest_common.sh@10 -- # set +x 00:02:12.508 14:06:53 -- spdk/autotest.sh@59 -- # create_test_list 00:02:12.508 14:06:53 -- common/autotest_common.sh@734 -- # xtrace_disable 00:02:12.508 14:06:53 -- common/autotest_common.sh@10 -- # set +x 00:02:12.508 14:06:54 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:12.508 14:06:54 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:12.508 14:06:54 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:12.508 14:06:54 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:12.508 14:06:54 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:12.508 14:06:54 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:12.508 14:06:54 -- common/autotest_common.sh@1441 -- # uname 00:02:12.508 14:06:54 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:02:12.508 14:06:54 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:12.508 14:06:54 -- common/autotest_common.sh@1461 -- # uname 00:02:12.508 14:06:54 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:02:12.508 14:06:54 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:12.508 14:06:54 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:12.508 14:06:54 -- spdk/autotest.sh@72 -- # hash lcov 00:02:12.508 14:06:54 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:12.508 14:06:54 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:12.508 --rc lcov_branch_coverage=1 00:02:12.508 --rc lcov_function_coverage=1 00:02:12.508 --rc genhtml_branch_coverage=1 00:02:12.508 --rc 
genhtml_function_coverage=1 00:02:12.508 --rc genhtml_legend=1 00:02:12.508 --rc geninfo_all_blocks=1 00:02:12.508 ' 00:02:12.508 14:06:54 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:12.508 --rc lcov_branch_coverage=1 00:02:12.508 --rc lcov_function_coverage=1 00:02:12.508 --rc genhtml_branch_coverage=1 00:02:12.508 --rc genhtml_function_coverage=1 00:02:12.508 --rc genhtml_legend=1 00:02:12.508 --rc geninfo_all_blocks=1 00:02:12.508 ' 00:02:12.508 14:06:54 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:12.508 --rc lcov_branch_coverage=1 00:02:12.508 --rc lcov_function_coverage=1 00:02:12.508 --rc genhtml_branch_coverage=1 00:02:12.508 --rc genhtml_function_coverage=1 00:02:12.508 --rc genhtml_legend=1 00:02:12.508 --rc geninfo_all_blocks=1 00:02:12.508 --no-external' 00:02:12.508 14:06:54 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:12.508 --rc lcov_branch_coverage=1 00:02:12.508 --rc lcov_function_coverage=1 00:02:12.508 --rc genhtml_branch_coverage=1 00:02:12.508 --rc genhtml_function_coverage=1 00:02:12.508 --rc genhtml_legend=1 00:02:12.508 --rc geninfo_all_blocks=1 00:02:12.508 --no-external' 00:02:12.508 14:06:54 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:12.766 lcov: LCOV version 1.14 00:02:12.766 14:06:54 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:22.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:22.831 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:22.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:22.831 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:22.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:22.831 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:22.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:22.831 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:22.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:22.831 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:22.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:22.831 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:22.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:22.831 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno
[geninfo printed the same two-line warning pair -- '<name>.gcno:no functions found' followed by 'geninfo: WARNING: GCOV did not produce any data for <name>.gcno' -- for every remaining header stub under test/cpp_headers, bdev_zone through zipf; these warnings are expected for header-only compilation units with no executed functions]
00:02:28.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:28.098 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:40.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:40.311 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:40.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:40.311 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:40.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:40.311 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:46.906 14:07:28 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:46.906 14:07:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:46.906 14:07:28 -- common/autotest_common.sh@10 -- # set +x 00:02:46.906 14:07:28 -- spdk/autotest.sh@91 -- # rm -f 00:02:46.906 14:07:28 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:47.862 0000:84:00.0 (8086 0a54): Already using the nvme driver 00:02:47.862 0000:00:04.7 (8086 3c27): Already using the ioatdma driver 00:02:47.862 0000:00:04.6 (8086 3c26): Already using the ioatdma driver 00:02:47.862 0000:00:04.5 (8086 3c25): Already using the ioatdma driver 00:02:47.862 0000:00:04.4 (8086 3c24): Already using the ioatdma driver 00:02:47.862 0000:00:04.3 (8086 3c23): Already using the ioatdma driver 00:02:47.862 0000:00:04.2 (8086 3c22): Already using the ioatdma driver 00:02:47.862 0000:00:04.1 (8086 3c21): Already using the ioatdma driver 00:02:47.862 0000:00:04.0 (8086 3c20): Already using the ioatdma driver 00:02:47.862 0000:80:04.7 (8086 3c27): Already using the ioatdma driver 00:02:47.862 0000:80:04.6 (8086 3c26): Already using the ioatdma driver 00:02:47.862 0000:80:04.5 (8086 3c25): Already using the ioatdma driver 00:02:47.862 0000:80:04.4 (8086 3c24): Already using the ioatdma driver 00:02:47.862 0000:80:04.3 (8086 3c23): Already using the ioatdma driver 00:02:47.862 0000:80:04.2 (8086 3c22): Already using the ioatdma driver 00:02:47.862 0000:80:04.1 (8086 3c21): Already using the ioatdma driver 00:02:47.862 0000:80:04.0 (8086 3c20): Already using the ioatdma driver 00:02:47.862 14:07:29 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:47.862 14:07:29 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:47.862 14:07:29 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:47.862 14:07:29 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:47.862 14:07:29 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:47.862 14:07:29 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:47.862 14:07:29 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:47.862 14:07:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:47.862 14:07:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:47.862 14:07:29 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:47.862 14:07:29 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:47.862 14:07:29 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:47.862 14:07:29 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:47.862 14:07:29 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:47.862 14:07:29 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:47.862 No valid GPT data, bailing 00:02:47.862 14:07:29 -- 
scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:47.862 14:07:29 -- scripts/common.sh@391 -- # pt= 00:02:47.862 14:07:29 -- scripts/common.sh@392 -- # return 1 00:02:47.862 14:07:29 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:47.862 1+0 records in 00:02:47.862 1+0 records out 00:02:47.862 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00206085 s, 509 MB/s 00:02:47.862 14:07:29 -- spdk/autotest.sh@118 -- # sync 00:02:47.862 14:07:29 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:47.862 14:07:29 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:47.862 14:07:29 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:49.766 14:07:31 -- spdk/autotest.sh@124 -- # uname -s 00:02:49.766 14:07:31 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:49.766 14:07:31 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:49.766 14:07:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:49.766 14:07:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:49.766 14:07:31 -- common/autotest_common.sh@10 -- # set +x 00:02:49.766 ************************************ 00:02:49.766 START TEST setup.sh 00:02:49.766 ************************************ 00:02:49.766 14:07:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:49.766 * Looking for test storage... 00:02:49.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:49.766 14:07:31 -- setup/test-setup.sh@10 -- # uname -s 00:02:49.766 14:07:31 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:49.766 14:07:31 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:49.766 14:07:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:49.766 14:07:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:49.766 14:07:31 -- common/autotest_common.sh@10 -- # set +x 00:02:50.025 ************************************ 00:02:50.025 START TEST acl 00:02:50.025 ************************************ 00:02:50.025 14:07:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:50.025 * Looking for test storage... 
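Just above, autotest gated the one-MiB dd wipe on block_in_use: spdk-gpt.py bailed with no valid GPT data and blkid printed no PTTYPE, so the namespace was judged free and zeroed. A minimal standalone sketch of that gate, assuming $dev names a whole NVMe namespace and blkid is installed:

# Zero the first MiB of a namespace only when no partition table is present.
# 'dev' is an assumed stand-in for the /dev/nvme0n1 seen in the log.
dev=/dev/nvme0n1
pt=$(blkid -s PTTYPE -o value "$dev")        # empty output => no GPT/MBR found
if [[ -z "$pt" ]]; then
    dd if=/dev/zero of="$dev" bs=1M count=1  # clobber any stale metadata
else
    echo "$dev is in use (partition table: $pt); leaving it alone"
fi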
00:02:50.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:50.025 14:07:31 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:50.025 14:07:31 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:50.025 14:07:31 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:50.025 14:07:31 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:50.025 14:07:31 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:50.025 14:07:31 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:50.025 14:07:31 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:50.025 14:07:31 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:50.025 14:07:31 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:50.025 14:07:31 -- setup/acl.sh@12 -- # devs=() 00:02:50.025 14:07:31 -- setup/acl.sh@12 -- # declare -a devs 00:02:50.025 14:07:31 -- setup/acl.sh@13 -- # drivers=() 00:02:50.025 14:07:31 -- setup/acl.sh@13 -- # declare -A drivers 00:02:50.025 14:07:31 -- setup/acl.sh@51 -- # setup reset 00:02:50.025 14:07:31 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:50.025 14:07:31 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:51.402 14:07:32 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:51.402 14:07:32 -- setup/acl.sh@16 -- # local dev driver 00:02:51.402 14:07:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.402 14:07:32 -- setup/acl.sh@15 -- # setup output status 00:02:51.402 14:07:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:51.402 14:07:32 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:51.982 Hugepages 00:02:51.982 node hugesize free / total 00:02:51.982 14:07:33 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:51.982 14:07:33 -- setup/acl.sh@19 -- # continue 00:02:51.982 14:07:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.242 14:07:33 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:52.242 14:07:33 -- setup/acl.sh@19 -- # continue 00:02:52.242 14:07:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.242 14:07:33 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:52.242 14:07:33 -- setup/acl.sh@19 -- # continue 00:02:52.242 14:07:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.242 00:02:52.242 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:52.242 14:07:33 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:52.242 14:07:33 -- setup/acl.sh@19 -- # continue 00:02:52.242 14:07:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.242 14:07:33 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:52.242 14:07:33 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:52.242 14:07:33 -- setup/acl.sh@20 -- # continue 00:02:52.242 14:07:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.242 14:07:33 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:52.242 14:07:33 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:52.242 14:07:33 -- setup/acl.sh@20 -- # continue 00:02:52.242 14:07:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.242 14:07:33 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:52.242 14:07:33 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:52.242 14:07:33 -- setup/acl.sh@20 -- # continue 00:02:52.242 14:07:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
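The run of [[ ... == *:*:*.* ]] / continue lines around this point is collect_setup_devs walking the 'setup.sh status' table shown above: rows that are not a PCI BDF (the hugepage summary) are skipped, channels bound to ioatdma are skipped, and each NVMe-bound BDF is recorded with its driver. A condensed sketch of the pattern, with the helper invocation assumed:

declare -a devs=()             # BDFs selected for the tests
declare -A drivers=()          # BDF -> kernel driver
while read -r _ dev _ _ _ driver _; do
    [[ $dev == *:*:*.* ]] || continue             # skip non-BDF rows
    [[ $driver == nvme ]] || continue             # ioatdma channels fall out here
    [[ $PCI_BLOCKED == *"$dev"* ]] && continue    # honor the deny list
    devs+=("$dev"); drivers["$dev"]=$driver
done < <(setup.sh status)                         # assumed to emit the table above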
00:02:52.242 [the same skip pattern -- match the BDF, see driver ioatdma, 'continue' -- repeats for each remaining I/OAT channel: 0000:00:04.3 through 0000:00:04.7 and 0000:80:04.0 through 0000:80:04.7] 00:02:52.243 14:07:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:02:52.243 14:07:33 -- setup/acl.sh@19 -- # [[ 0000:84:00.0 == *:*:*.* ]] 00:02:52.243 14:07:33 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:52.243 14:07:33 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\4\:\0\0\.\0* ]] 00:02:52.243 14:07:33 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:52.243 14:07:33 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:52.243 14:07:33 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.243 14:07:33 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:52.243 14:07:33 -- setup/acl.sh@54 -- # run_test denied denied 00:02:52.243 14:07:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:52.243 14:07:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:52.243 14:07:33 -- common/autotest_common.sh@10 -- # set +x 00:02:52.243 ************************************ 00:02:52.243 START TEST denied 00:02:52.243 ************************************ 00:02:52.243 14:07:33 -- common/autotest_common.sh@1111 -- # denied 00:02:52.243 14:07:33 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:84:00.0' 00:02:52.243 14:07:33 -- setup/acl.sh@38 -- # setup output config 00:02:52.243 14:07:33 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:84:00.0' 00:02:52.243 14:07:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:52.243 14:07:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:53.620 0000:84:00.0 (8086 0a54): Skipping denied controller at 0000:84:00.0 00:02:53.620 14:07:34 -- setup/acl.sh@40 -- # verify 0000:84:00.0 00:02:53.620 14:07:34 -- setup/acl.sh@28 -- # local dev driver 00:02:53.620 14:07:34 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:53.620 14:07:34 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:84:00.0 ]] 00:02:53.620 14:07:34 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:84:00.0/driver 00:02:53.620 14:07:34 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:53.620 14:07:34 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:53.620 14:07:34 -- setup/acl.sh@41 -- # setup reset 00:02:53.620 14:07:34 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:53.620 14:07:34 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:55.524 00:02:55.524 real 0m3.204s 00:02:55.524 user 0m0.919s 00:02:55.524 sys 0m1.509s 00:02:55.524 14:07:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:55.524 14:07:36 -- common/autotest_common.sh@10 -- # set +x 00:02:55.524 ************************************ 00:02:55.524 END TEST denied 00:02:55.524 ************************************ 00:02:55.524 14:07:37 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:55.524 14:07:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:55.524 14:07:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:55.524 14:07:37 -- common/autotest_common.sh@10 -- # set +x 00:02:55.783 ************************************ 00:02:55.783 START TEST allowed 00:02:55.783 ************************************ 00:02:55.783 14:07:37 -- common/autotest_common.sh@1111 -- # allowed 00:02:55.783 14:07:37 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:84:00.0 00:02:55.783 14:07:37 -- setup/acl.sh@46 -- # grep -E '0000:84:00.0 .*: nvme -> .*' 00:02:55.783 14:07:37 -- setup/acl.sh@45 -- # setup output config 00:02:55.783 14:07:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:55.783 14:07:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:57.689 
0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:02:57.689 14:07:39 -- setup/acl.sh@47 -- # verify 00:02:57.689 14:07:39 -- setup/acl.sh@28 -- # local dev driver 00:02:57.689 14:07:39 -- setup/acl.sh@48 -- # setup reset 00:02:57.689 14:07:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:57.689 14:07:39 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:59.069 00:02:59.069 real 0m3.251s 00:02:59.069 user 0m0.882s 00:02:59.069 sys 0m1.440s 00:02:59.069 14:07:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:59.069 14:07:40 -- common/autotest_common.sh@10 -- # set +x 00:02:59.069 ************************************ 00:02:59.069 END TEST allowed 00:02:59.069 ************************************ 00:02:59.069 00:02:59.069 real 0m9.047s 00:02:59.069 user 0m2.823s 00:02:59.069 sys 0m4.594s 00:02:59.069 14:07:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:59.069 14:07:40 -- common/autotest_common.sh@10 -- # set +x 00:02:59.069 ************************************ 00:02:59.069 END TEST acl 00:02:59.069 ************************************ 00:02:59.069 14:07:40 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:59.069 14:07:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:59.069 14:07:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:59.069 14:07:40 -- common/autotest_common.sh@10 -- # set +x 00:02:59.069 ************************************ 00:02:59.069 START TEST hugepages 00:02:59.069 ************************************ 00:02:59.069 14:07:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:59.069 * Looking for test storage... 
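The denied/allowed pair that just finished exercises setup.sh's two PCI filters: with PCI_BLOCKED=' 0000:84:00.0' the config step prints the 'Skipping denied controller' banner, and with PCI_ALLOWED=0000:84:00.0 the same controller is rebound nvme -> vfio-pci. The gate those banners imply, sketched with an assumed helper name and matching rule:

# Decide whether setup.sh may touch a controller, per the deny/allow env vars.
should_configure() {
    local bdf=$1
    if [[ -n $PCI_BLOCKED && $PCI_BLOCKED == *"$bdf"* ]]; then
        echo "Skipping denied controller at $bdf"
        return 1
    fi
    # with a non-empty allow list, only listed BDFs pass
    [[ -z $PCI_ALLOWED || $PCI_ALLOWED == *"$bdf"* ]]
}
should_configure 0000:84:00.0 && echo "0000:84:00.0: nvme -> vfio-pci"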
00:02:59.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:59.069 14:07:40 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:59.069 14:07:40 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:59.069 14:07:40 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:59.069 14:07:40 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:59.069 14:07:40 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:59.069 14:07:40 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:59.069 14:07:40 -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:59.069 14:07:40 -- setup/common.sh@18 -- # local node= 00:02:59.069 14:07:40 -- setup/common.sh@19 -- # local var val 00:02:59.069 14:07:40 -- setup/common.sh@20 -- # local mem_f mem 00:02:59.069 14:07:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.069 14:07:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.069 14:07:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.069 14:07:40 -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.069 14:07:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.069 14:07:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.069 14:07:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.069 14:07:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 30212336 kB' 'MemAvailable: 34583568 kB' 'Buffers: 2696 kB' 'Cached: 15716532 kB' 'SwapCached: 0 kB' 'Active: 12787024 kB' 'Inactive: 3552704 kB' 'Active(anon): 11591812 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 623636 kB' 'Mapped: 221252 kB' 'Shmem: 10971312 kB' 'KReclaimable: 184396 kB' 'Slab: 460176 kB' 'SReclaimable: 184396 kB' 'SUnreclaim: 275780 kB' 'KernelStack: 10032 kB' 'PageTables: 8972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32437040 kB' 'Committed_AS: 12635568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189792 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB' 00:02:59.069 14:07:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.069 14:07:40 -- setup/common.sh@32 -- # continue 00:02:59.069 14:07:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.069 14:07:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.069 14:07:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.069 14:07:40 -- setup/common.sh@32 -- # continue 00:02:59.069 14:07:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.069 14:07:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.069 14:07:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.069 14:07:40 -- setup/common.sh@32 -- # continue 00:02:59.069 14:07:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.069 14:07:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.069 14:07:40 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.069 14:07:40 -- setup/common.sh@32 -- # continue 00:02:59.069 14:07:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.069 14:07:40 -- setup/common.sh@31 -- # read -r var val _
[the scan repeats identically for every remaining /proc/meminfo field, Cached through ShmemHugePages, each compared against Hugepagesize and skipped with 'continue'; the capture ends mid-loop]
00:02:59.070 14:07:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.070 14:07:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.070 14:07:40 -- setup/common.sh@32 -- # continue 00:02:59.070 14:07:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.070 14:07:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.070 14:07:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.070 14:07:40 -- setup/common.sh@32 -- # continue 00:02:59.070 14:07:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.070 14:07:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.070 14:07:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.070 14:07:40 -- setup/common.sh@32 -- # continue 00:02:59.070 14:07:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.070 14:07:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.070 14:07:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.070 14:07:40 -- setup/common.sh@32 -- # continue 00:02:59.070 14:07:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.070 14:07:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.070 14:07:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.070 14:07:40 -- setup/common.sh@32 -- # continue 00:02:59.070 14:07:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.070 14:07:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.070 14:07:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.070 14:07:40 -- setup/common.sh@32 -- # continue 00:02:59.070 14:07:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.070 14:07:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.070 14:07:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.070 14:07:40 -- setup/common.sh@32 -- # continue 00:02:59.070 14:07:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.070 14:07:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.070 14:07:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.070 14:07:40 -- setup/common.sh@32 -- # continue 00:02:59.070 14:07:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.070 14:07:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.070 14:07:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.070 14:07:40 -- setup/common.sh@32 -- # continue 00:02:59.070 14:07:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.070 14:07:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.070 14:07:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.070 14:07:40 -- setup/common.sh@32 -- # continue 00:02:59.070 14:07:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.070 14:07:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.070 14:07:40 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:59.070 14:07:40 -- setup/common.sh@33 -- # echo 2048 00:02:59.070 14:07:40 -- setup/common.sh@33 -- # return 0 00:02:59.070 14:07:40 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:59.070 14:07:40 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:59.070 14:07:40 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:59.070 14:07:40 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:59.070 14:07:40 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:59.070 14:07:40 -- 
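What the xtrace above boils down to: setup/common.sh's get_meminfo walks /proc/meminfo under IFS=': ', skipping keys until the requested one (Hugepagesize here) matches, then echoes its value -- 2048 in this run. A minimal standalone sketch of that loop (the function name is illustrative, not from the script):

    # Print the value of one /proc/meminfo key, as the traced loop does.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # e.g. prints 2048 when called with Hugepagesize
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

00:02:59.070 14:07:40 --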
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:59.070 14:07:40 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:59.070 14:07:40 -- setup/hugepages.sh@207 -- # get_nodes 00:02:59.070 14:07:40 -- setup/hugepages.sh@27 -- # local node 00:02:59.070 14:07:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:59.070 14:07:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:59.070 14:07:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:59.070 14:07:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:59.070 14:07:40 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:59.070 14:07:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:59.070 14:07:40 -- setup/hugepages.sh@208 -- # clear_hp 00:02:59.070 14:07:40 -- setup/hugepages.sh@37 -- # local node hp 00:02:59.070 14:07:40 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:59.070 14:07:40 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:59.070 14:07:40 -- setup/hugepages.sh@41 -- # echo 0 00:02:59.070 14:07:40 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:59.070 14:07:40 -- setup/hugepages.sh@41 -- # echo 0 00:02:59.335 14:07:40 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:59.335 14:07:40 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:59.335 14:07:40 -- setup/hugepages.sh@41 -- # echo 0 00:02:59.335 14:07:40 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:59.335 14:07:40 -- setup/hugepages.sh@41 -- # echo 0 00:02:59.335 14:07:40 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:59.335 14:07:40 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:59.335 14:07:40 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:59.335 14:07:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:59.335 14:07:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:59.335 14:07:40 -- common/autotest_common.sh@10 -- # set +x 00:02:59.335 ************************************ 00:02:59.335 START TEST default_setup 00:02:59.335 ************************************ 00:02:59.335 14:07:40 -- common/autotest_common.sh@1111 -- # default_setup 00:02:59.335 14:07:40 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:59.335 14:07:40 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:59.335 14:07:40 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:59.335 14:07:40 -- setup/hugepages.sh@51 -- # shift 00:02:59.335 14:07:40 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:59.335 14:07:40 -- setup/hugepages.sh@52 -- # local node_ids 00:02:59.335 14:07:40 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:59.335 14:07:40 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:59.335 14:07:40 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:59.335 14:07:40 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:59.335 14:07:40 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:59.335 14:07:40 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:59.335 14:07:40 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:59.335 14:07:40 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:59.335 14:07:40 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:59.335 14:07:40 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
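The get_nodes/clear_hp pass just traced enumerates both NUMA nodes and zeroes every per-node hugepage pool before the test runs; roughly, assuming the same sysfs layout the trace shows (needs root):

    # Zero all hugepage pools on every NUMA node, as clear_hp does above,
    # then flag the pools as cleared for the setup script.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes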
00:02:59.335 14:07:40 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:59.335 14:07:40 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:59.335 14:07:40 -- setup/hugepages.sh@73 -- # return 0 00:02:59.335 14:07:40 -- setup/hugepages.sh@137 -- # setup output 00:02:59.335 14:07:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.335 14:07:40 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:00.271 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:03:00.271 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:03:00.271 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:03:00.271 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:03:00.271 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:03:00.271 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:03:00.271 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:03:00.271 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:03:00.272 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:03:00.272 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:03:00.272 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:03:00.272 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:03:00.272 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:03:00.272 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:03:00.272 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:03:00.272 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:03:01.211 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:03:01.211 14:07:42 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:01.211 14:07:42 -- setup/hugepages.sh@89 -- # local node 00:03:01.211 14:07:42 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:01.211 14:07:42 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:01.211 14:07:42 -- setup/hugepages.sh@92 -- # local surp 00:03:01.211 14:07:42 -- setup/hugepages.sh@93 -- # local resv 00:03:01.211 14:07:42 -- setup/hugepages.sh@94 -- # local anon 00:03:01.211 14:07:42 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:01.211 14:07:42 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:01.211 14:07:42 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:01.211 14:07:42 -- setup/common.sh@18 -- # local node= 00:03:01.211 14:07:42 -- setup/common.sh@19 -- # local var val 00:03:01.211 14:07:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:01.211 14:07:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.211 14:07:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.211 14:07:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.211 14:07:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.211 14:07:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.211 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.211 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.211 14:07:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32318808 kB' 'MemAvailable: 36690024 kB' 'Buffers: 2696 kB' 'Cached: 15716608 kB' 'SwapCached: 0 kB' 'Active: 12808096 kB' 'Inactive: 3552704 kB' 'Active(anon): 11612884 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643904 kB' 'Mapped: 221528 kB' 'Shmem: 10971388 kB' 'KReclaimable: 184364 kB' 'Slab: 460056 kB' 'SReclaimable: 184364 kB' 'SUnreclaim: 275692 kB' 'KernelStack: 10032 
kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12659396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190032 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB'
00:03:01.211 14:07:42 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.211 14:07:42 -- setup/common.sh@32 -- # continue
[... identical "continue / IFS=': ' / read -r var val _" xtrace repeats while the key is not AnonHugePages: MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted ...]
00:03:01.475 14:07:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.475 14:07:42 -- setup/common.sh@33 -- # echo 0 00:03:01.475 14:07:42 -- setup/common.sh@33 -- # return 0 00:03:01.475 14:07:42 -- setup/hugepages.sh@97 -- # anon=0 00:03:01.475 14:07:42 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:01.475 14:07:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.475 14:07:42 -- setup/common.sh@18 -- # local node= 00:03:01.475 14:07:42 -- setup/common.sh@19 -- # local var val 00:03:01.475 14:07:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:01.475 14:07:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.475 14:07:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.475 14:07:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.475 14:07:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.475 14:07:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.475 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.475 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.475 14:07:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32320244 kB' 'MemAvailable: 36691460 kB' 'Buffers: 2696 kB' 'Cached: 15716612 kB' 'SwapCached: 0 kB' 'Active: 12811908 kB' 'Inactive: 3552704 kB' 'Active(anon): 11616696 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 647708 kB' 'Mapped: 221472 kB' 'Shmem: 10971392 kB' 'KReclaimable: 184364 kB' 'Slab: 460064 kB' 'SReclaimable: 184364 kB' 'SUnreclaim: 275700 kB' 'KernelStack: 10160 kB' 'PageTables: 9132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12662324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189956 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB'
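The AnonHugePages value just extracted, and the HugePages_Surp scan that follows, can be spot-checked by hand; a quick equivalent of the traced lookup, assuming a stock /proc/meminfo:

    # One-off equivalents of the traced lookups (values in kB where the
    # kernel reports a unit; the HugePages_* counters are plain counts).
    awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo
    awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo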
00:03:01.475 14:07:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.475 14:07:42 -- setup/common.sh@32 -- # continue
[... identical "continue / IFS=': ' / read -r var val _" xtrace repeats while the key is not HugePages_Surp: MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd ...]
00:03:01.477 14:07:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.477 14:07:42 -- setup/common.sh@33 -- # echo 0 00:03:01.477 14:07:42 -- setup/common.sh@33 -- # return 0 00:03:01.477 14:07:42 -- setup/hugepages.sh@99 -- # surp=0 00:03:01.477 14:07:42 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:01.477 14:07:42 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:01.477 14:07:42 -- setup/common.sh@18 -- # local node= 00:03:01.477 14:07:42 -- setup/common.sh@19 -- # local var val 00:03:01.477 14:07:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:01.477 14:07:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.477 14:07:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.477 14:07:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.477 14:07:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.477 14:07:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.477 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.477 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.477 14:07:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32322908 kB' 'MemAvailable: 36694124 kB' 'Buffers: 2696 kB' 'Cached: 15716616 kB' 'SwapCached: 0 kB' 'Active: 12810408 kB' 'Inactive: 3552704 kB' 'Active(anon): 11615196 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646924 kB' 'Mapped: 221788 kB' 'Shmem: 10971396 kB' 'KReclaimable: 184364 kB' 'Slab: 460008 kB' 'SReclaimable: 184364 kB' 'SUnreclaim: 275644 kB' 'KernelStack: 9792 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12661444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189812 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB'
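The unused node= local and the mem=("${mem[@]#Node +([0-9]) }") prefix-stripping in each get_meminfo call suggest the same probe can be pointed at one NUMA node's counters; a hedged sketch of that per-node variant (node 0 shown; these are the standard NUMA sysfs paths, not paths taken from this log):

    # Per-node hugepage counters on a NUMA kernel -- the "Node N" prefix
    # the trace strips comes from this per-node meminfo format.
    grep HugePages_Total /sys/devices/system/node/node0/meminfo
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages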
00:03:01.477 14:07:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.477 14:07:42 -- setup/common.sh@32 -- # continue
[... identical "continue / IFS=': ' / read -r var val _" xtrace repeats while the key is not HugePages_Rsvd: MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free ...]
00:03:01.479 14:07:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.479 14:07:42 -- setup/common.sh@33 -- # echo 0 00:03:01.479 14:07:42 -- setup/common.sh@33 -- # return 0 00:03:01.479 14:07:42 -- setup/hugepages.sh@100 -- # resv=0 00:03:01.479 14:07:42 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:01.479 nr_hugepages=1024 00:03:01.479 14:07:42 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:01.479 resv_hugepages=0 00:03:01.479 14:07:42 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:01.479 surplus_hugepages=0 00:03:01.479 14:07:42 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:01.479 anon_hugepages=0 00:03:01.479 14:07:42 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:01.479 14:07:42 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:01.479 14:07:42 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
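The bookkeeping above reduces to one identity: the pool the test configured must match what the kernel reports once surplus and reserved pages are counted. A standalone spot-check under that assumption (the expected count is hard-coded to this run's 1024):

    # Mirrors the (( 1024 == nr_hugepages + surp + resv )) check above.
    expected=1024
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
    resv=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
    (( total == expected + surp + resv )) || echo "hugepage accounting mismatch" >&2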
get=HugePages_Total 00:03:01.479 14:07:42 -- setup/common.sh@18 -- # local node= 00:03:01.479 14:07:42 -- setup/common.sh@19 -- # local var val 00:03:01.479 14:07:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:01.479 14:07:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.479 14:07:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.479 14:07:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.479 14:07:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.479 14:07:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.479 14:07:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32322892 kB' 'MemAvailable: 36694108 kB' 'Buffers: 2696 kB' 'Cached: 15716636 kB' 'SwapCached: 0 kB' 'Active: 12806768 kB' 'Inactive: 3552704 kB' 'Active(anon): 11611556 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643296 kB' 'Mapped: 221680 kB' 'Shmem: 10971416 kB' 'KReclaimable: 184364 kB' 'Slab: 459928 kB' 'SReclaimable: 184364 kB' 'SUnreclaim: 275564 kB' 'KernelStack: 10048 kB' 'PageTables: 8888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12658544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189792 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB' 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:01.479 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.479 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.479 14:07:42 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # 
continue 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.480 14:07:42 -- setup/common.sh@32 -- # continue 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.480 14:07:42 -- setup/common.sh@31 -- # read -r var val _ 
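The pass above — and each one that follows — is the same generic lookup: pick a meminfo source, strip any per-node prefix, then scan field by field until the requested counter matches. A minimal, self-contained bash sketch of that logic, reconstructed from the xtrace rather than taken from SPDK's actual setup/common.sh, so treat names and details as approximations:

  #!/usr/bin/env bash
  shopt -s extglob
  # get_meminfo_sketch FIELD [NODE] — echo FIELD's value from
  # /proc/meminfo, or from the node-local meminfo when NODE is given.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node <id> "; strip it so
      # both file formats parse identically (needs extglob).
      mem=("${mem[@]#Node +([0-9]) }")
      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<<"$line"
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done
      return 1
  }

  get_meminfo_sketch HugePages_Total      # e.g. 1024 on this box
  get_meminfo_sketch HugePages_Surp 0     # surplus pages on node 0, if present

A call like the second one takes the node-local file when it exists and falls back to /proc/meminfo otherwise, matching the @23/@24 branch visible in the trace.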
[scan tail: FileHugePages … Unaccepted checked and skipped, then HugePages_Total matches]
00:03:01.481 14:07:42 -- setup/common.sh@33 -- # echo 1024
00:03:01.481 14:07:42 -- setup/common.sh@33 -- # return 0
00:03:01.481 14:07:42 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:01.481 14:07:42 -- setup/hugepages.sh@112 -- # get_nodes
00:03:01.481 14:07:42 -- setup/hugepages.sh@27 -- # local node
00:03:01.481 14:07:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:01.481 14:07:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:01.481 14:07:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:01.481 14:07:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:01.481 14:07:42 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:01.481 14:07:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:01.481 14:07:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:01.481 14:07:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:01.481 14:07:42 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:01.481 14:07:42 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:01.481 14:07:42 -- setup/common.sh@18 -- # local node=0
00:03:01.481 14:07:42 -- setup/common.sh@19 -- # local var val
00:03:01.481 14:07:42 -- setup/common.sh@20 -- # local mem_f mem
00:03:01.481 14:07:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:01.481 14:07:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:01.481 14:07:42 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:01.481 14:07:42 -- setup/common.sh@28 -- # mapfile -t mem
00:03:01.481 14:07:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:01.481 14:07:42 -- setup/common.sh@31 -- # IFS=': '
00:03:01.481 14:07:42 -- setup/common.sh@31 -- # read -r var val _
00:03:01.481 14:07:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 17398552 kB' 'MemUsed: 15436140 kB' 'SwapCached: 0 kB' 'Active: 9179460 kB' 'Inactive: 3414024 kB' 'Active(anon): 8241416 kB' 'Inactive(anon): 0 kB' 'Active(file): 938044 kB' 'Inactive(file): 3414024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12152736 kB' 'Mapped: 133928 kB' 'AnonPages: 444316 kB' 'Shmem: 7800668 kB' 'KernelStack: 6056 kB' 'PageTables: 5156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121756 kB' 'Slab: 274664 kB' 'SReclaimable: 121756 kB' 'SUnreclaim: 152908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[setup/common.sh@31-32 field scan of the node0 snapshot above, matching against HugePages_Surp, across MemTotal … HugePages_Free until HugePages_Surp matches]
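The get_nodes step traced above enumerates the node directories under sysfs and records each node's hugepage count (1024 on node0, 0 on node1 in this run). A rough bash sketch of that discovery step, reconstructed from the trace — the awk lookup is this sketch's own shortcut, not the script's code:

  #!/usr/bin/env bash
  shopt -s extglob
  # Enumerate NUMA nodes and record per-node HugePages_Total, in the
  # style of the traced get_nodes. Approximation, not setup/hugepages.sh.
  declare -a nodes_sys
  no_nodes=0
  for node in /sys/devices/system/node/node+([0-9]); do
      [[ -d $node ]] || continue        # glob stays literal on non-NUMA boxes
      id=${node##*node}
      # Per-node meminfo lines look like "Node 0 HugePages_Total:  1024"
      nodes_sys[id]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
      (( ++no_nodes ))
  done
  echo "no_nodes=$no_nodes per-node HugePages_Total: ${nodes_sys[*]}"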
00:03:01.482 14:07:42 -- setup/common.sh@33 -- # echo 0
00:03:01.482 14:07:42 -- setup/common.sh@33 -- # return 0
00:03:01.482 14:07:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:01.482 14:07:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:01.482 14:07:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:01.482 14:07:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:01.482 14:07:42 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:01.482 node0=1024 expecting 1024
00:03:01.482 14:07:42 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:01.482
00:03:01.482 real 0m2.138s
00:03:01.482 user 0m0.492s
00:03:01.482 sys 0m0.660s
00:03:01.482 14:07:42 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:01.482 14:07:42 -- common/autotest_common.sh@10 -- # set +x
00:03:01.482 ************************************
00:03:01.482 END TEST default_setup
00:03:01.482 ************************************
00:03:01.482 14:07:42 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:01.482 14:07:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:01.482 14:07:42 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:01.482 14:07:42 -- common/autotest_common.sh@10 -- # set +x
00:03:01.482 ************************************
00:03:01.482 START TEST per_node_1G_alloc
00:03:01.482 ************************************
00:03:01.482 14:07:43 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc
00:03:01.482 14:07:43 -- setup/hugepages.sh@143 -- # local IFS=,
00:03:01.482 14:07:43 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:01.482 14:07:43 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:01.482 14:07:43 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:01.482 14:07:43 -- setup/hugepages.sh@51 -- # shift
00:03:01.482 14:07:43 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:01.482 14:07:43 -- setup/hugepages.sh@52 -- # local node_ids
00:03:01.482 14:07:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:01.482 14:07:43 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:01.482 14:07:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:01.483 14:07:43 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:01.483 14:07:43 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:01.483 14:07:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:01.483 14:07:43 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:01.483 14:07:43 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:01.483 14:07:43 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:01.483 14:07:43 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:01.483 14:07:43 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:01.483 14:07:43 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:01.483 14:07:43 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:01.483 14:07:43 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:01.483 14:07:43 -- setup/hugepages.sh@73 -- # return 0
00:03:01.483 14:07:43 -- setup/hugepages.sh@146 -- # NRHUGE=512
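For the numbers traced above: the request is 1048576 kB (1 GiB) against the 2048 kB Hugepagesize reported in the snapshots, giving 512 pages, and each listed node receives the full 512. The division itself is inferred from those values rather than shown in the trace; a small sketch of the arithmetic under that assumption:

  #!/usr/bin/env bash
  # Sizing arithmetic behind the traced get_test_nr_hugepages 1048576 0 1.
  # Values come from this run; the size/default division is inferred.
  size=1048576                # kB requested, from the trace
  default_hugepages=2048      # kB, the Hugepagesize in the snapshots
  nr_hugepages=$(( size / default_hugepages ))   # -> 512 pages
  declare -a nodes_test
  for id in 0 1; do           # node_ids=('0' '1') in the trace
      nodes_test[id]=$nr_hugepages   # each listed node gets the full count
  done
  echo "NRHUGE=$nr_hugepages HUGENODE=0,1 per-node: ${nodes_test[*]}"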
00:03:01.483 14:07:43 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:01.483 14:07:43 -- setup/hugepages.sh@146 -- # setup output
00:03:01.483 14:07:43 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:01.483 14:07:43 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:02.421 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver
00:03:02.421 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:02.421 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver
00:03:02.421 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver
00:03:02.421 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver
00:03:02.421 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver
00:03:02.421 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver
00:03:02.421 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver
00:03:02.421 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver
00:03:02.421 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver
00:03:02.421 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver
00:03:02.421 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver
00:03:02.421 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver
00:03:02.421 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver
00:03:02.421 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver
00:03:02.421 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver
00:03:02.421 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver
00:03:02.684 14:07:44 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:02.684 14:07:44 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:02.684 14:07:44 -- setup/hugepages.sh@89 -- # local node
00:03:02.684 14:07:44 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:02.684 14:07:44 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:02.684 14:07:44 -- setup/hugepages.sh@92 -- # local surp
00:03:02.684 14:07:44 -- setup/hugepages.sh@93 -- # local resv
00:03:02.684 14:07:44 -- setup/hugepages.sh@94 -- # local anon
00:03:02.684 14:07:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:02.684 14:07:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:02.684 14:07:44 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:02.684 14:07:44 -- setup/common.sh@18 -- # local node=
00:03:02.684 14:07:44 -- setup/common.sh@19 -- # local var val
00:03:02.684 14:07:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:02.684 14:07:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:02.684 14:07:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:02.684 14:07:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:02.684 14:07:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:02.684 14:07:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:02.684 14:07:44 -- setup/common.sh@31 -- # IFS=': '
00:03:02.684 14:07:44 -- setup/common.sh@31 -- # read -r var val _
00:03:02.684 14:07:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32331168 kB' 'MemAvailable: 36702384 kB' 'Buffers: 2696 kB' 'Cached: 15716692 kB' 'SwapCached: 0 kB' 'Active: 12810292 kB' 'Inactive: 3552704 kB' 'Active(anon): 11615080 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646816 kB' 'Mapped: 221632 kB' 'Shmem: 10971472 kB' 'KReclaimable: 184364 kB' 'Slab: 459948 kB' 'SReclaimable: 184364 kB' 'SUnreclaim: 275584 kB' 'KernelStack: 10112 kB' 'PageTables: 9076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12661476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189892 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB'
[setup/common.sh@31-32 field scan of the snapshot above, matching against AnonHugePages, across MemTotal … HardwareCorrupted until AnonHugePages matches]
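The @96 test above gates anonymous-hugepage accounting on the THP mode string, where the bracketed entry ("always [madvise] never" in this run, so madvise is active) marks the current mode. A hedged sketch of the same gate, assuming the usual sysfs path for the mode file:

  #!/usr/bin/env bash
  # THP gate in the style of the traced check: only count AnonHugePages
  # when transparent hugepages are not disabled outright ([never]).
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
  fi
  echo "thp='$thp' anon_hugepages=${anon} kB"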
00:03:02.685 14:07:44 -- setup/common.sh@33 -- # echo 0
00:03:02.685 14:07:44 -- setup/common.sh@33 -- # return 0
00:03:02.685 14:07:44 -- setup/hugepages.sh@97 -- # anon=0
00:03:02.685 14:07:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:02.685 14:07:44 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:02.685 14:07:44 -- setup/common.sh@18 -- # local node=
00:03:02.685 14:07:44 -- setup/common.sh@19 -- # local var val
00:03:02.685 14:07:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:02.685 14:07:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:02.685 14:07:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:02.685 14:07:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:02.685 14:07:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:02.685 14:07:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:02.685 14:07:44 -- setup/common.sh@31 -- # IFS=': '
00:03:02.685 14:07:44 -- setup/common.sh@31 -- # read -r var val _
00:03:02.685 14:07:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32338040 kB' 'MemAvailable: 36709256 kB' 'Buffers: 2696 kB' 'Cached: 15716692 kB' 'SwapCached: 0 kB' 'Active: 12810636 kB' 'Inactive: 3552704 kB' 'Active(anon): 11615424 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 647316 kB' 'Mapped: 221772 kB' 'Shmem: 10971472 kB' 'KReclaimable: 184364 kB' 'Slab: 459980 kB' 'SReclaimable: 184364 kB' 'SUnreclaim: 275616 kB' 'KernelStack: 10080 kB' 'PageTables: 9000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12661488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189844 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB'
[setup/common.sh@31-32 field scan of the snapshot above, matching against HugePages_Surp, across MemTotal … HugePages_Rsvd until HugePages_Surp matches]
00:03:02.686 14:07:44 -- setup/common.sh@33 -- # echo 0
00:03:02.686 14:07:44 -- setup/common.sh@33 -- # return 0
00:03:02.686 14:07:44 -- setup/hugepages.sh@99 -- # surp=0
00:03:02.686 14:07:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:02.686 14:07:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:02.686 14:07:44 -- setup/common.sh@18 -- # local node=
00:03:02.686 14:07:44 -- setup/common.sh@19 -- # local var val
00:03:02.686 14:07:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:02.686 14:07:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:02.686 14:07:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:02.686 14:07:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:02.686 14:07:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:02.686 14:07:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:02.686 14:07:44 -- setup/common.sh@31 -- # IFS=': '
00:03:02.686 14:07:44 -- setup/common.sh@31 -- # read -r var val _
00:03:02.687 14:07:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32341528 kB' 'MemAvailable: 36712744 kB' 'Buffers: 2696 kB' 'Cached: 15716692 kB' 'SwapCached: 0 kB' 'Active: 12804696 kB' 'Inactive: 3552704 kB' 'Active(anon): 11609484 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641372 kB' 'Mapped: 221336 kB' 'Shmem: 10971472 kB' 'KReclaimable: 184364 kB' 'Slab: 459980 kB' 'SReclaimable: 184364 kB' 'SUnreclaim: 275616 kB' 'KernelStack: 10080 kB' 'PageTables: 8996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12655380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189824 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB'
[setup/common.sh@31-32 field scan of the snapshot above, matching against HugePages_Rsvd, begins with MemTotal, MemFree, MemAvailable — the captured output breaks off here]
setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # 
continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 
14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.687 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.687 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.688 14:07:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.688 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.688 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.688 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.688 14:07:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.688 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.688 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.688 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.688 14:07:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.688 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.688 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.688 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.688 14:07:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.688 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.688 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.688 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.688 14:07:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.688 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.688 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.688 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.688 14:07:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.688 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.688 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.688 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.688 14:07:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.688 14:07:44 -- setup/common.sh@33 -- # echo 0 00:03:02.688 14:07:44 -- setup/common.sh@33 -- # return 0 00:03:02.688 14:07:44 -- setup/hugepages.sh@100 -- # resv=0 00:03:02.688 14:07:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:02.688 nr_hugepages=1024 00:03:02.688 14:07:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:02.688 resv_hugepages=0 00:03:02.688 14:07:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:02.688 surplus_hugepages=0 00:03:02.688 14:07:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:02.688 anon_hugepages=0 00:03:02.688 14:07:44 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:02.688 14:07:44 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:02.688 14:07:44 -- setup/hugepages.sh@110 -- # get_meminfo 
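The trace above boils down to one helper pattern repeated per statistic: split each /proc/meminfo line on ': ' with read, skip keys until the requested one matches, then echo its value. A minimal standalone sketch of that pattern (get_meminfo_key is an illustrative name, not the test's exact helper):

    get_meminfo_key() {                        # sketch: fetch one key from /proc/meminfo
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every non-matching key is skipped
            echo "$val"                        # the "kB" unit, when present, lands in $_
            return 0
        done < /proc/meminfo
        return 1                               # key not present
    }

    surp=$(get_meminfo_key HugePages_Surp)     # 0 in the run above
    resv=$(get_meminfo_key HugePages_Rsvd)     # 0 in the run above

The hugepages.sh@107 check above is then the identity that ties these reads together: the HugePages_Total read next must equal nr_hugepages plus the surplus and reserved counts just collected.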
00:03:02.688 14:07:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:02.688 14:07:44 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:02.688 14:07:44 -- setup/common.sh@18 -- # local node=
00:03:02.688 14:07:44 -- setup/common.sh@19 -- # local var val
00:03:02.688 14:07:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:02.688 14:07:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:02.688 14:07:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:02.688 14:07:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:02.688 14:07:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:02.688 14:07:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:02.688 14:07:44 -- setup/common.sh@31 -- # IFS=': '
00:03:02.688 14:07:44 -- setup/common.sh@31 -- # read -r var val _
00:03:02.688 14:07:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32342000 kB' 'MemAvailable: 36713216 kB' 'Buffers: 2696 kB' 'Cached: 15716704 kB' 'SwapCached: 0 kB' 'Active: 12804404 kB' 'Inactive: 3552704 kB' 'Active(anon): 11609192 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641060 kB' 'Mapped: 220796 kB' 'Shmem: 10971484 kB' 'KReclaimable: 184364 kB' 'Slab: 459996 kB' 'SReclaimable: 184364 kB' 'SUnreclaim: 275632 kB' 'KernelStack: 10080 kB' 'PageTables: 8964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12655396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189824 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB'
00:03:02.689 [scan of all /proc/meminfo keys (MemTotal through Unaccepted) against HugePages_Total; none match until the target key]
00:03:02.689 14:07:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:02.689 14:07:44 -- setup/common.sh@33 -- # echo 1024
00:03:02.689 14:07:44 -- setup/common.sh@33 -- # return 0
00:03:02.689 14:07:44 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:02.689 14:07:44 -- setup/hugepages.sh@112 -- # get_nodes
00:03:02.689 14:07:44 -- setup/hugepages.sh@27 -- # local node
00:03:02.689 14:07:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:02.689 14:07:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:02.689 14:07:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:02.689 14:07:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:02.689 14:07:44 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:02.689 14:07:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:02.689 14:07:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:02.689 14:07:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
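In the node-qualified calls that follow, get_meminfo takes an optional node argument: with no node it reads /proc/meminfo, and for node N it switches to /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node N " prefix that the trace strips with the extglob pattern +([0-9]). A rough standalone equivalent of that source selection (node_meminfo is an illustrative name):

    shopt -s extglob                           # needed for the +([0-9]) pattern below

    node_meminfo() {                           # sketch: print meminfo, optionally per node
        local node=$1 mem_f=/proc/meminfo
        local -a mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")       # drop the "Node N " prefix, as in the trace
        printf '%s\n' "${mem[@]}"
    }

    node_meminfo 0 | grep HugePages_Surp       # e.g. "HugePages_Surp: 0"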
00:03:02.689 14:07:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:02.689 14:07:44 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:02.689 14:07:44 -- setup/common.sh@18 -- # local node=0
00:03:02.689 14:07:44 -- setup/common.sh@19 -- # local var val
00:03:02.689 14:07:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:02.689 14:07:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:02.689 14:07:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:02.689 14:07:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:02.689 14:07:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:02.689 14:07:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:02.689 14:07:44 -- setup/common.sh@31 -- # IFS=': '
00:03:02.689 14:07:44 -- setup/common.sh@31 -- # read -r var val _
00:03:02.689 14:07:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 18443424 kB' 'MemUsed: 14391268 kB' 'SwapCached: 0 kB' 'Active: 9172856 kB' 'Inactive: 3414024 kB' 'Active(anon): 8234812 kB' 'Inactive(anon): 0 kB' 'Active(file): 938044 kB' 'Inactive(file): 3414024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12152796 kB' 'Mapped: 133520 kB' 'AnonPages: 437352 kB' 'Shmem: 7800728 kB' 'KernelStack: 6120 kB' 'PageTables: 5076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121756 kB' 'Slab: 274644 kB' 'SReclaimable: 121756 kB' 'SUnreclaim: 152888 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:02.690 [scan of the node0 meminfo keys (MemTotal through HugePages_Free) against HugePages_Surp; none match, each iteration continues]
00:03:02.690 14:07:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:02.690 14:07:44 -- setup/common.sh@33 -- # echo 0
00:03:02.690 14:07:44 -- setup/common.sh@33 -- # return 0
00:03:02.690 14:07:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
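Each pass of the hugepages.sh@115 loop builds the per-node expectation: start from the 512 pages assigned to the node, add the reserved share, then add whatever HugePages_Surp that node's own meminfo reports (0 for both nodes in this run). A small sketch of that bookkeeping, reusing the node_meminfo sketch above (the array literal simply mirrors this two-node run):

    declare -a nodes_test=([0]=512 [1]=512)    # per-node split from the test setup
    resv=0                                     # reserved pages, 0 in this run

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))         # fold in the reserved pages
        surp=$(node_meminfo "$node" | awk '/^HugePages_Surp/ {print $2}')
        (( nodes_test[node] += surp ))         # fold in this node's surplus (0 here)
    done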
00:03:02.690 14:07:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.690 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.690 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.690 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.690 14:07:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.690 14:07:44 -- setup/common.sh@33 -- # echo 0 00:03:02.690 14:07:44 -- setup/common.sh@33 -- # return 0 00:03:02.690 14:07:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:02.690 14:07:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:02.690 14:07:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:02.690 14:07:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:02.690 14:07:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:02.690 14:07:44 -- setup/common.sh@18 -- # local node=1 00:03:02.690 14:07:44 -- setup/common.sh@19 -- # local var val 00:03:02.690 14:07:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:02.690 14:07:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.690 14:07:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:02.690 14:07:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:02.690 14:07:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.690 14:07:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.690 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.690 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.690 14:07:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19456488 kB' 'MemFree: 13898576 kB' 'MemUsed: 5557912 kB' 'SwapCached: 0 kB' 'Active: 3631184 kB' 'Inactive: 138680 kB' 'Active(anon): 3374016 kB' 'Inactive(anon): 0 kB' 'Active(file): 257168 kB' 'Inactive(file): 138680 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3566624 kB' 'Mapped: 87276 kB' 'AnonPages: 203308 kB' 'Shmem: 3170776 kB' 'KernelStack: 3976 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62608 kB' 'Slab: 185352 kB' 'SReclaimable: 62608 kB' 'SUnreclaim: 122744 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:02.690 14:07:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.690 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.690 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.690 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.690 14:07:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.690 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.690 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.690 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.690 14:07:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.690 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.690 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.690 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.690 14:07:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.690 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.690 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.690 
14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.690 14:07:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.690 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.690 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.690 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.690 14:07:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.690 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.690 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.690 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.690 14:07:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.690 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.690 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- 
setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # continue 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.691 14:07:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.691 14:07:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.691 14:07:44 -- setup/common.sh@33 -- # echo 0 00:03:02.691 14:07:44 -- setup/common.sh@33 -- # return 0 00:03:02.691 14:07:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:02.691 14:07:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:02.691 14:07:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:02.691 14:07:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:02.691 14:07:44 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:02.691 node0=512 expecting 512 00:03:02.691 14:07:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:02.691 14:07:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:02.691 14:07:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:02.691 14:07:44 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:02.691 node1=512 expecting 512 00:03:02.691 14:07:44 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:02.691 00:03:02.691 real 0m1.178s 00:03:02.691 user 0m0.552s 00:03:02.691 sys 0m0.657s 00:03:02.691 14:07:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:02.691 14:07:44 -- common/autotest_common.sh@10 -- # set +x 00:03:02.691 ************************************ 00:03:02.691 END TEST per_node_1G_alloc 00:03:02.691 ************************************ 00:03:02.691 14:07:44 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:02.691 
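The START/END banners and the trailing real/user/sys figures around each test come from the run_test wrapper in common/autotest_common.sh, which @212 above has just dispatched for even_2G_alloc. A minimal sketch of what the trace implies that wrapper does -- the '[' 2 -le 1 ']' arity guard, the banners, and the time builtin are all visible in the log; the exact body is an assumption:

run_test() {
    # Arity guard, seen in the trace as: '[' 2 -le 1 ']'
    if [ $# -le 1 ]; then
        echo "usage: run_test <test_name> <command> [args...]" >&2
        return 1
    fi
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"    # produces the real/user/sys lines logged at test end
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}

The real wrapper also toggles xtrace around the banners (the xtrace_disable / set +x entries above); that plumbing is omitted here.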
14:07:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:02.691 14:07:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:02.691 14:07:44 -- common/autotest_common.sh@10 -- # set +x 00:03:02.949 ************************************ 00:03:02.949 START TEST even_2G_alloc 00:03:02.949 ************************************ 00:03:02.949 14:07:44 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:03:02.949 14:07:44 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:02.949 14:07:44 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:02.949 14:07:44 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:02.949 14:07:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:02.949 14:07:44 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:02.949 14:07:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:02.949 14:07:44 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:02.949 14:07:44 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:02.949 14:07:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:02.949 14:07:44 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:02.949 14:07:44 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:02.949 14:07:44 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:02.949 14:07:44 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:02.949 14:07:44 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:02.949 14:07:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:02.949 14:07:44 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:02.949 14:07:44 -- setup/hugepages.sh@83 -- # : 512 00:03:02.949 14:07:44 -- setup/hugepages.sh@84 -- # : 1 00:03:02.949 14:07:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:02.949 14:07:44 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:02.949 14:07:44 -- setup/hugepages.sh@83 -- # : 0 00:03:02.949 14:07:44 -- setup/hugepages.sh@84 -- # : 0 00:03:02.949 14:07:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:02.949 14:07:44 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:02.949 14:07:44 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:02.949 14:07:44 -- setup/hugepages.sh@153 -- # setup output 00:03:02.949 14:07:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:02.949 14:07:44 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:03.890 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:03:03.890 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:03.890 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:03:03.890 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:03:03.890 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:03:03.890 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:03:03.890 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:03:03.890 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:03:03.890 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:03:03.890 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:03:03.890 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:03:03.890 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:03:03.890 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:03:03.890 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:03:03.890 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:03:03.890 0000:80:04.1 (8086 3c21): 
Already using the vfio-pci driver 00:03:03.890 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:03:03.890 14:07:45 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:03.890 14:07:45 -- setup/hugepages.sh@89 -- # local node 00:03:03.890 14:07:45 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:03.890 14:07:45 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:03.890 14:07:45 -- setup/hugepages.sh@92 -- # local surp 00:03:03.890 14:07:45 -- setup/hugepages.sh@93 -- # local resv 00:03:03.890 14:07:45 -- setup/hugepages.sh@94 -- # local anon 00:03:03.890 14:07:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:03.890 14:07:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:03.890 14:07:45 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:03.890 14:07:45 -- setup/common.sh@18 -- # local node= 00:03:03.890 14:07:45 -- setup/common.sh@19 -- # local var val 00:03:03.890 14:07:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:03.890 14:07:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.890 14:07:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.890 14:07:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.890 14:07:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.890 14:07:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.890 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.890 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.890 14:07:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32351612 kB' 'MemAvailable: 36722828 kB' 'Buffers: 2696 kB' 'Cached: 15716788 kB' 'SwapCached: 0 kB' 'Active: 12804652 kB' 'Inactive: 3552704 kB' 'Active(anon): 11609440 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641128 kB' 'Mapped: 221272 kB' 'Shmem: 10971568 kB' 'KReclaimable: 184364 kB' 'Slab: 459820 kB' 'SReclaimable: 184364 kB' 'SUnreclaim: 275456 kB' 'KernelStack: 10080 kB' 'PageTables: 8988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12655616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189872 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB' 00:03:03.890 14:07:45 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.890 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.890 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.890 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.890 14:07:45 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.890 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.890 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.890 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.890 14:07:45 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.890 14:07:45 -- setup/common.sh@32 -- # continue 
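The scan that has just started walks /proc/meminfo field by field to pull out AnonHugePages; it is the first of several such lookups verify_nr_hugepages makes. The allocation being verified was fixed a few entries earlier, when get_test_nr_hugepages 2097152 ran with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes. A hedged condensation of that sizing arithmetic -- the helper bodies are assumptions, but every figure matches the trace:

#!/usr/bin/env bash
size_kb=2097152                  # 2 GiB requested, expressed in kB as in the trace
default_hugepages=2048           # kB; matches 'Hugepagesize: 2048 kB' in the dumps
nr_hugepages=$(( size_kb / default_hugepages ))   # 1024 pages
no_nodes=2                       # _no_nodes=2 in the trace
declare -a nodes_test
# The trace fills the array back to front (nodes_test[_no_nodes - 1]=512, twice)
for (( node = no_nodes - 1; node >= 0; node-- )); do
    nodes_test[node]=$(( nr_hugepages / no_nodes ))   # 512 per node
done
echo "nr_hugepages=$nr_hugepages nodes: ${nodes_test[*]}"   # 1024, 512 512

The 512-per-node split mirrors the node0=512/node1=512 expectations checked at the end of the previous test.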
00:03:03.890 14:07:45 -- setup/common.sh@31 -- # IFS=': '
00:03:03.890 14:07:45 -- setup/common.sh@31 -- # read -r var val _
(the IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue cycle repeats, field by field, from Buffers through KernelStack) 00:03:03.891 
14:07:45 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.891 14:07:45 -- setup/common.sh@33 -- # echo 0 00:03:03.891 14:07:45 -- setup/common.sh@33 -- # 
return 0 00:03:03.891 14:07:45 -- setup/hugepages.sh@97 -- # anon=0 00:03:03.891 14:07:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:03.891 14:07:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.891 14:07:45 -- setup/common.sh@18 -- # local node= 00:03:03.891 14:07:45 -- setup/common.sh@19 -- # local var val 00:03:03.891 14:07:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:03.891 14:07:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.891 14:07:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.891 14:07:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.891 14:07:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.891 14:07:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.891 14:07:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32354668 kB' 'MemAvailable: 36725884 kB' 'Buffers: 2696 kB' 'Cached: 15716788 kB' 'SwapCached: 0 kB' 'Active: 12804820 kB' 'Inactive: 3552704 kB' 'Active(anon): 11609608 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641380 kB' 'Mapped: 221272 kB' 'Shmem: 10971568 kB' 'KReclaimable: 184364 kB' 'Slab: 459820 kB' 'SReclaimable: 184364 kB' 'SUnreclaim: 275456 kB' 'KernelStack: 10080 kB' 'PageTables: 8980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12655628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189872 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB' 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.891 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.891 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.892 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.892 14:07:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.892 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.892 14:07:45 -- 
setup/common.sh@31 -- # IFS=': '
00:03:03.892 14:07:45 -- setup/common.sh@31 -- # read -r var val _
(the IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle repeats, field by field, from SwapCached through ShmemHugePages)
00:03:03.892 14:07:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 14:07:45 -- setup/common.sh@32 
-- # continue 00:03:03.892 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.892 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.892 14:07:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.892 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.892 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.892 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.892 14:07:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.892 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.893 14:07:45 -- setup/common.sh@33 -- # echo 0 00:03:03.893 14:07:45 -- setup/common.sh@33 -- # return 0 00:03:03.893 14:07:45 -- setup/hugepages.sh@99 -- # surp=0 00:03:03.893 14:07:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:03.893 14:07:45 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:03.893 14:07:45 -- setup/common.sh@18 -- # local node= 00:03:03.893 14:07:45 -- setup/common.sh@19 -- # local var val 00:03:03.893 14:07:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:03.893 14:07:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.893 14:07:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.893 14:07:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.893 14:07:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.893 14:07:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.893 14:07:45 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32353660 kB' 'MemAvailable: 36724876 kB' 'Buffers: 2696 kB' 'Cached: 15716788 kB' 'SwapCached: 0 kB' 'Active: 12804688 kB' 'Inactive: 3552704 kB' 'Active(anon): 11609476 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641200 kB' 'Mapped: 221228 kB' 'Shmem: 10971568 kB' 'KReclaimable: 184364 kB' 'Slab: 459820 kB' 'SReclaimable: 184364 kB' 'SUnreclaim: 275456 kB' 'KernelStack: 10128 kB' 'PageTables: 9128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12655640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189888 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB' 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.893 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.893 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 
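By now the same walk has run three times -- AnonHugePages, HugePages_Surp, and here HugePages_Rsvd -- so the shape of the helper being traced is clear from the repeated @17-@33 line references. A hedged reconstruction of setup/common.sh's get_meminfo, inferred from the trace rather than copied from the script:

shopt -s extglob   # for the "Node <N> " strip below

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo
    local -a mem
    # Per-node queries read the sysfs copy instead (trace line @23).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"                   # @28
    mem=("${mem[@]#Node +([0-9]) }")            # @29: strip "Node <N> " prefixes
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"  # @31: split "Field: value kB"
        [[ $var == "$get" ]] || continue        # @32: the long compare runs above
        echo "$val"                             # @33: kB figure, or a bare count
        return 0
    done
    return 1
}

Each call re-reads the whole file and replays the field-by-field walk, which is why every lookup in this log costs a few dozen trace entries.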
00:03:03.893 14:07:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:03.893 14:07:45 -- setup/common.sh@32 -- # continue
(the IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue cycle repeats, field by field, from Inactive(anon) through WritebackTmp)
00:03:03.894 14:07:45 -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 
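This third walk is about to land on HugePages_Rsvd just below, completing the inputs verify_nr_hugepages needs. Its bookkeeping, condensed with the literal values this run produced (a paraphrase of the hugepages.sh @97-@109 trace, not the script itself):

anon=0              # get_meminfo AnonHugePages
surp=0              # get_meminfo HugePages_Surp
resv=0              # get_meminfo HugePages_Rsvd, resolved just below
nr_hugepages=1024
echo "nr_hugepages=$nr_hugepages"       # echoed below in the trace
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"
(( 1024 == nr_hugepages + surp + resv ))   # @107: 1024 == 1024 + 0 + 0, holds
(( 1024 == nr_hugepages ))                 # @109: holds, so the check passes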
00:03:03.894 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.894 14:07:45 -- setup/common.sh@33 -- # echo 0 00:03:03.894 14:07:45 -- setup/common.sh@33 -- # return 0 00:03:03.894 14:07:45 -- setup/hugepages.sh@100 -- # resv=0 00:03:03.894 14:07:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:03.894 nr_hugepages=1024 00:03:03.894 14:07:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:03.894 resv_hugepages=0 00:03:03.894 14:07:45 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:03.894 surplus_hugepages=0 00:03:03.894 14:07:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:03.894 anon_hugepages=0 00:03:03.894 14:07:45 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:03.894 14:07:45 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:03.894 14:07:45 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:03.894 14:07:45 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:03.894 14:07:45 -- setup/common.sh@18 -- # local node= 00:03:03.894 14:07:45 -- setup/common.sh@19 -- # local var val 00:03:03.894 14:07:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:03.894 14:07:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.894 14:07:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.894 14:07:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.894 14:07:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.894 14:07:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.894 14:07:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32353812 kB' 'MemAvailable: 36725028 kB' 'Buffers: 2696 kB' 'Cached: 15716816 kB' 'SwapCached: 0 kB' 'Active: 12804612 kB' 'Inactive: 3552704 kB' 'Active(anon): 11609400 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641068 kB' 'Mapped: 220820 kB' 'Shmem: 10971596 kB' 'KReclaimable: 184364 kB' 'Slab: 459812 kB' 'SReclaimable: 184364 kB' 
'SUnreclaim: 275448 kB' 'KernelStack: 10096 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12655656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189888 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB' 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.894 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.894 14:07:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.895 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.895 14:07:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.896 14:07:45 -- setup/common.sh@33 -- # echo 1024 00:03:03.896 14:07:45 -- setup/common.sh@33 -- # return 0 00:03:03.896 14:07:45 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:03.896 14:07:45 -- setup/hugepages.sh@112 -- # get_nodes 00:03:03.896 14:07:45 -- setup/hugepages.sh@27 -- # local node 00:03:03.896 14:07:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:03.896 14:07:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:03.896 14:07:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:03.896 14:07:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:03.896 14:07:45 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:03.896 14:07:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:03.896 14:07:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:03.896 14:07:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:03.896 14:07:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:03.896 14:07:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.896 14:07:45 -- setup/common.sh@18 -- # local node=0 00:03:03.896 14:07:45 -- setup/common.sh@19 -- # local var val 00:03:03.896 14:07:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:03.896 14:07:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.896 14:07:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:03.896 14:07:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:03.896 14:07:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.896 14:07:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 18461068 kB' 'MemUsed: 14373624 kB' 'SwapCached: 0 kB' 'Active: 9172916 kB' 'Inactive: 3414024 kB' 'Active(anon): 8234872 kB' 'Inactive(anon): 0 kB' 'Active(file): 938044 kB' 'Inactive(file): 3414024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12152848 kB' 'Mapped: 133548 kB' 'AnonPages: 437232 kB' 'Shmem: 7800780 kB' 'KernelStack: 6088 kB' 'PageTables: 5068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121756 kB' 'Slab: 274404 kB' 'SReclaimable: 121756 kB' 'SUnreclaim: 152648 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ 
MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.896 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.896 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # 
continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@33 -- # echo 0 00:03:03.897 14:07:45 -- setup/common.sh@33 -- # return 0 00:03:03.897 14:07:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:03.897 14:07:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:03.897 14:07:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:03.897 14:07:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:03.897 14:07:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.897 14:07:45 -- setup/common.sh@18 -- # local node=1 00:03:03.897 14:07:45 -- setup/common.sh@19 -- # local var val 00:03:03.897 14:07:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:03.897 14:07:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.897 14:07:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:03.897 14:07:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:03.897 14:07:45 -- setup/common.sh@28 -- # mapfile -t mem 
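
[Editor's note] What this wall of xtrace is doing: get_meminfo walks /proc/meminfo (or, as just above, a per-NUMA-node copy under /sys/devices/system/node/nodeN/) one key at a time; every "IFS=': '" / "read -r var val _" / "continue" triple is one rejected key, and the backslash-escaped right-hand sides of the "[[ ... == ... ]]" records are just how set -x prints a literal (non-glob) comparison. Below is a minimal sketch of the helper being traced, reconstructed from the traced setup/common.sh lines rather than copied from the SPDK tree, so details may differ from the real script:

    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # When a node number is passed, answer from that node's meminfo instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node <n> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the "continue" records in the trace
            echo "$val"                        # e.g. the "echo 0" / "echo 1024" records
            return 0
        done < <(printf '%s\n' "${mem[@]}")    # the big printf record in the trace
        return 1
    }

    get_meminfo HugePages_Surp 0   # -> prints 0 here, read from node0's meminfo
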
00:03:03.897 14:07:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19456488 kB' 'MemFree: 13892492 kB' 'MemUsed: 5563996 kB' 'SwapCached: 0 kB' 'Active: 3631724 kB' 'Inactive: 138680 kB' 'Active(anon): 3374556 kB' 'Inactive(anon): 0 kB' 'Active(file): 257168 kB' 'Inactive(file): 138680 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3566688 kB' 'Mapped: 87272 kB' 'AnonPages: 203788 kB' 'Shmem: 3170840 kB' 'KernelStack: 3992 kB' 'PageTables: 3888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62608 kB' 'Slab: 185408 kB' 'SReclaimable: 62608 kB' 'SUnreclaim: 122800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.897 14:07:45 -- setup/common.sh@32 -- # 
continue 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.897 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.898 14:07:45 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # continue 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.898 14:07:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.898 14:07:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.898 14:07:45 -- setup/common.sh@33 -- # echo 0 00:03:03.898 14:07:45 -- setup/common.sh@33 -- # return 0 00:03:03.898 14:07:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:03.898 14:07:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:03.898 14:07:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:03.898 14:07:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:03.898 14:07:45 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:03.898 node0=512 expecting 512 00:03:03.898 14:07:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:03.898 14:07:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:03.898 14:07:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:03.898 14:07:45 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:03.898 node1=512 expecting 512 00:03:03.898 14:07:45 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:03.898 00:03:03.898 real 0m1.110s 00:03:03.898 user 0m0.515s 00:03:03.898 sys 0m0.621s 00:03:03.898 14:07:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:03.898 14:07:45 -- common/autotest_common.sh@10 -- # set +x 00:03:03.898 ************************************ 00:03:03.898 END TEST even_2G_alloc 00:03:03.898 ************************************ 00:03:04.157 14:07:45 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:04.157 14:07:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:04.157 14:07:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:04.157 14:07:45 -- common/autotest_common.sh@10 -- # set +x 00:03:04.157 ************************************ 00:03:04.157 START TEST odd_alloc 00:03:04.157 ************************************ 00:03:04.157 14:07:45 -- common/autotest_common.sh@1111 -- # odd_alloc 00:03:04.157 14:07:45 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:04.157 14:07:45 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:04.157 14:07:45 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:04.158 14:07:45 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:04.158 14:07:45 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:04.158 14:07:45 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:04.158 14:07:45 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:04.158 14:07:45 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:04.158 14:07:45 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:04.158 14:07:45 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:04.158 14:07:45 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:04.158 14:07:45 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:04.158 14:07:45 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:04.158 14:07:45 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:04.158 
14:07:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:04.158 14:07:45 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:04.158 14:07:45 -- setup/hugepages.sh@83 -- # : 513 00:03:04.158 14:07:45 -- setup/hugepages.sh@84 -- # : 1 00:03:04.158 14:07:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:04.158 14:07:45 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:04.158 14:07:45 -- setup/hugepages.sh@83 -- # : 0 00:03:04.158 14:07:45 -- setup/hugepages.sh@84 -- # : 0 00:03:04.158 14:07:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:04.158 14:07:45 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:04.158 14:07:45 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:04.158 14:07:45 -- setup/hugepages.sh@160 -- # setup output 00:03:04.158 14:07:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.158 14:07:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:05.094 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:03:05.094 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:05.094 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:03:05.094 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:03:05.094 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:03:05.094 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:03:05.094 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:03:05.094 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:03:05.094 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:03:05.094 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:03:05.094 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:03:05.094 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:03:05.094 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:03:05.094 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:03:05.094 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:03:05.094 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:03:05.094 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:03:05.094 14:07:46 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:05.094 14:07:46 -- setup/hugepages.sh@89 -- # local node 00:03:05.094 14:07:46 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:05.094 14:07:46 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:05.094 14:07:46 -- setup/hugepages.sh@92 -- # local surp 00:03:05.094 14:07:46 -- setup/hugepages.sh@93 -- # local resv 00:03:05.094 14:07:46 -- setup/hugepages.sh@94 -- # local anon 00:03:05.094 14:07:46 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:05.094 14:07:46 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:05.094 14:07:46 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:05.094 14:07:46 -- setup/common.sh@18 -- # local node= 00:03:05.094 14:07:46 -- setup/common.sh@19 -- # local var val 00:03:05.094 14:07:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:05.094 14:07:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.094 14:07:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.094 14:07:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.094 14:07:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.094 14:07:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.094 14:07:46 -- setup/common.sh@31 -- # IFS=': 
' 00:03:05.094 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32314176 kB' 'MemAvailable: 36685388 kB' 'Buffers: 2696 kB' 'Cached: 15716884 kB' 'SwapCached: 0 kB' 'Active: 12805456 kB' 'Inactive: 3552704 kB' 'Active(anon): 11610244 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641832 kB' 'Mapped: 220924 kB' 'Shmem: 10971664 kB' 'KReclaimable: 184356 kB' 'Slab: 459720 kB' 'SReclaimable: 184356 kB' 'SUnreclaim: 275364 kB' 'KernelStack: 10144 kB' 'PageTables: 9460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 12655840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190000 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB' 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 
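
[Editor's note] On the odd split being set up here: get_test_nr_hugepages asks for 2098176 kB, which the trace turns into nr_hugepages=1025, and get_test_nr_hugepages_per_node then spreads those 1025 pages over the two nodes. The bare ": 513" / ": 1" records above are consistent with ": $(( ... ))" side-effect arithmetic; a plausible reconstruction follows (variable names come from the trace, the exact expressions are inferred, not quoted from the SPDK tree):

    _nr_hugepages=1025
    _no_nodes=2
    declare -a nodes_test

    while (( _no_nodes > 0 )); do
        # Give the highest-numbered remaining node an integer share of what is left.
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
        # Carry the remainder toward node 0, so node0 ends up with 513 of the 1025.
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # traces as ": 513", then ": 0"
        : $(( --_no_nodes ))                                  # traces as ": 1", then ": 0"
    done

    printf 'node%s=%s\n' 0 "${nodes_test[0]}" 1 "${nodes_test[1]}"   # node0=513, node1=512

This reproduces the per-node assignments recorded above (512 for node1, then 513 for node0) and explains why odd_alloc expects an asymmetric split, unlike the 512/512 of even_2G_alloc.
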
00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # 
continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.095 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.095 14:07:46 -- setup/common.sh@31 -- # read -r var val _
[... scan loop elided: the same "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue" trace repeats for Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted ...]
00:03:05.096 14:07:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.096 14:07:46 -- setup/common.sh@33 -- # echo 0 00:03:05.096 14:07:46 -- setup/common.sh@33 -- # return 0
00:03:05.096 14:07:46 -- setup/hugepages.sh@97 -- # anon=0
00:03:05.096 14:07:46 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:05.096 14:07:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.096 14:07:46 -- setup/common.sh@18 -- # local node= 00:03:05.096 14:07:46 -- setup/common.sh@19 -- # local var val 00:03:05.096 14:07:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:05.096 14:07:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.096 14:07:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.096 14:07:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.096 14:07:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.096 14:07:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.096 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.096 14:07:46 -- setup/common.sh@31 -- # read -r var val _
00:03:05.096 14:07:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32322028 kB' 'MemAvailable: 36693240 kB' 'Buffers: 2696 kB' 'Cached: 15716884 kB' 'SwapCached: 0 kB' 'Active: 12806148 kB' 'Inactive: 3552704 kB' 'Active(anon): 11610936 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642184 kB' 'Mapped: 220924 kB' 'Shmem: 10971664 kB' 'KReclaimable: 184356 kB' 'Slab: 459720 kB' 'SReclaimable: 184356 kB' 'SUnreclaim: 275364 kB' 'KernelStack: 10192 kB' 'PageTables: 9636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 12655852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190016 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB'
[... scan loop elided: the field scan walks every /proc/meminfo key from MemTotal through HugePages_Rsvd, issuing one continue per non-matching key ...]
00:03:05.097 14:07:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.097 14:07:46 -- setup/common.sh@33 -- # echo 0 00:03:05.097 14:07:46 -- setup/common.sh@33 -- # return 0
00:03:05.097 14:07:46 -- setup/hugepages.sh@99 -- # surp=0
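[editor's note: the trace above is the generic meminfo lookup in setup/common.sh -- slurp the file with mapfile, strip the "Node <n> " prefix that per-node files carry, then split each line on ': ' until the requested key matches and echo its value. A minimal, self-contained sketch of that pattern follows; the function name lookup_meminfo and its exact structure are reconstructed from the trace, not copied from the script.]

shopt -s extglob   # the "Node +([0-9]) " strip below is an extglob pattern

lookup_meminfo() {   # usage: lookup_meminfo <Key> [<numa-node>]
    local get=$1 node=${2:-}
    local line var val _ mem
    local mem_f=/proc/meminfo
    # prefer the per-node view when a node index was passed and it exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # per-node meminfo lines are prefixed "Node <n> "; strip that prefix
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"   # bare number; a trailing "kB" lands in $_
            return 0
        fi
    done
    return 1   # requested key not present in this file
}

[on the machine in this log, lookup_meminfo AnonHugePages and lookup_meminfo HugePages_Surp would both print 0, matching the anon=0 and surp=0 results above]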
00:03:05.097 14:07:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:05.097 14:07:46 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:05.097 14:07:46 -- setup/common.sh@18 -- # local node= 00:03:05.097 14:07:46 -- setup/common.sh@19 -- # local var val 00:03:05.097 14:07:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:05.097 14:07:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.097 14:07:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.097 14:07:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.097 14:07:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.097 14:07:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.097 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.097 14:07:46 -- setup/common.sh@31 -- # read -r var val _
00:03:05.097 14:07:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32325344 kB' 'MemAvailable: 36696556 kB' 'Buffers: 2696 kB' 'Cached: 15716900 kB' 'SwapCached: 0 kB' 'Active: 12800580 kB' 'Inactive: 3552704 kB' 'Active(anon): 11605368 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637012 kB' 'Mapped: 220092 kB' 'Shmem: 10971680 kB' 'KReclaimable: 184356 kB' 'Slab: 459744 kB' 'SReclaimable: 184356 kB' 'SUnreclaim: 275388 kB' 'KernelStack: 10112 kB' 'PageTables: 9296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 12633776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189856 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB'
[... scan loop elided: one continue per key, MemTotal through HugePages_Free, until the requested key is reached ...]
00:03:05.098 14:07:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.098 14:07:46 -- setup/common.sh@33 -- # echo 0 00:03:05.098 14:07:46 -- setup/common.sh@33 -- # return 0
00:03:05.360 14:07:46 -- setup/hugepages.sh@100 -- # resv=0
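[editor's note: with anon, surp and resv collected, hugepages.sh@107-110 cross-checks the kernel's total against the requested count. The arithmetic with this run's numbers, written out as a sketch -- variable names mirror the trace, lookup_meminfo is the illustrative helper above:]

nr_hugepages=1025   # requested allocation; odd on purpose for this test
anon=0              # AnonHugePages, gathered above
surp=0              # HugePages_Surp, gathered above
resv=0              # HugePages_Rsvd, gathered above

total=$(lookup_meminfo HugePages_Total)   # 1025 on this machine
# the kernel-reported total must equal the request plus surplus and reserved
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
fi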
00:03:05.360 14:07:46 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
nr_hugepages=1025
00:03:05.360 14:07:46 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:05.360 14:07:46 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:05.360 14:07:46 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:05.360 14:07:46 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:05.360 14:07:46 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:05.360 14:07:46 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:05.360 14:07:46 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:05.360 14:07:46 -- setup/common.sh@18 -- # local node= 00:03:05.360 14:07:46 -- setup/common.sh@19 -- # local var val 00:03:05.360 14:07:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:05.360 14:07:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.360 14:07:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.360 14:07:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.360 14:07:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.360 14:07:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.360 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.360 14:07:46 -- setup/common.sh@31 -- # read -r var val _
00:03:05.360 14:07:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32325484 kB' 'MemAvailable: 36696696 kB' 'Buffers: 2696 kB' 'Cached: 15716912 kB' 'SwapCached: 0 kB' 'Active: 12800004 kB' 'Inactive: 3552704 kB' 'Active(anon): 11604792 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636340 kB' 'Mapped: 219992 kB' 'Shmem: 10971692 kB' 'KReclaimable: 184356 kB' 'Slab: 459736 kB' 'SReclaimable: 184356 kB' 'SUnreclaim: 275380 kB' 'KernelStack: 10128 kB' 'PageTables: 9324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 12633788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189840 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB'
[... scan loop elided: one continue per key until HugePages_Total matches ...]
00:03:05.362 14:07:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.362 14:07:46 -- setup/common.sh@33 -- # echo 1025 00:03:05.362 14:07:46 -- setup/common.sh@33 -- # return 0
00:03:05.362 14:07:46 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:05.362 14:07:46 -- setup/hugepages.sh@112 -- # get_nodes 00:03:05.362 14:07:46 -- setup/hugepages.sh@27 -- # local node 00:03:05.362 14:07:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:05.362 14:07:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:05.362 14:07:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:05.362 14:07:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:05.362 14:07:46 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:05.362 14:07:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
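[editor's note: get_nodes (hugepages.sh@27-33 above) discovers the NUMA topology with an extglob over /sys and records a per-node page count -- 512 and 513 here, since an odd 1025 pages cannot split evenly across two nodes, which is what this test exercises. A sketch of that enumeration; it uses the illustrative lookup_meminfo helper as one plausible source of the per-node counts, which the per-node snapshots below confirm as HugePages_Total: 512 and 513:]

shopt -s extglob nullglob   # node+([0-9]) is an extglob; nullglob drops non-matches

nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    # ${node##*node} strips the path up to the last "node", leaving the index
    nodes_sys[${node##*node}]=$(lookup_meminfo HugePages_Total "${node##*node}")
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
echo "nodes: $no_nodes, per-node totals: ${nodes_sys[*]}"   # 2 and "512 513" here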
00:03:05.362 14:07:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:05.362 14:07:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:05.362 14:07:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:05.362 14:07:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.362 14:07:46 -- setup/common.sh@18 -- # local node=0 00:03:05.362 14:07:46 -- setup/common.sh@19 -- # local var val 00:03:05.362 14:07:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:05.362 14:07:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.362 14:07:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:05.362 14:07:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:05.362 14:07:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.362 14:07:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.362 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.362 14:07:46 -- setup/common.sh@31 -- # read -r var val _
00:03:05.362 14:07:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 18451012 kB' 'MemUsed: 14383680 kB' 'SwapCached: 0 kB' 'Active: 9169652 kB' 'Inactive: 3414024 kB' 'Active(anon): 8231608 kB' 'Inactive(anon): 0 kB' 'Active(file): 938044 kB' 'Inactive(file): 3414024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12152884 kB' 'Mapped: 133472 kB' 'AnonPages: 433932 kB' 'Shmem: 7800816 kB' 'KernelStack: 6120 kB' 'PageTables: 5312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121756 kB' 'Slab: 274332 kB' 'SReclaimable: 121756 kB' 'SUnreclaim: 152576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... scan loop elided: one continue per node0 key until HugePages_Surp matches ...]
00:03:05.363 14:07:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.363 14:07:46 -- setup/common.sh@33 -- # echo 0 00:03:05.363 14:07:46 -- setup/common.sh@33 -- # return 0
00:03:05.363 14:07:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:05.363 14:07:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:05.363 14:07:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:05.363 14:07:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:05.363 14:07:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.363 14:07:46 -- setup/common.sh@18 -- # local node=1 00:03:05.363 14:07:46 -- setup/common.sh@19 -- # local var val 00:03:05.363 14:07:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:05.363 14:07:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.363 14:07:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:05.363 14:07:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:05.363 14:07:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.363 14:07:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.363 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.363 14:07:46 -- setup/common.sh@31 -- # read -r var val _
00:03:05.364 14:07:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19456488 kB' 'MemFree: 13875196 kB' 'MemUsed: 5581292 kB' 'SwapCached: 0 kB' 'Active: 3629992 kB' 'Inactive: 138680 kB' 'Active(anon): 3372824 kB' 'Inactive(anon): 0 kB' 'Active(file): 257168 kB' 'Inactive(file): 138680 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3566752 kB' 'Mapped: 86124 kB' 'AnonPages: 201964 kB' 'Shmem: 3170904 kB' 'KernelStack: 3912 kB' 'PageTables: 3668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62568 kB' 'Slab: 185348 kB' 'SReclaimable: 62568 kB' 'SUnreclaim: 122780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[... scan loop over the node1 keys in progress when this excerpt ends: MemTotal, MemFree, MemUsed, ... each non-matching key drawing a continue ...]
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.364 14:07:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.364 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.364 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.365 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.365 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.365 14:07:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.365 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.365 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.365 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.365 14:07:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.365 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.365 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.365 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.365 14:07:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.365 14:07:46 -- setup/common.sh@32 -- # continue 00:03:05.365 14:07:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.365 14:07:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.365 14:07:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.365 14:07:46 -- setup/common.sh@33 -- # echo 0 00:03:05.365 14:07:46 -- setup/common.sh@33 -- # return 0 00:03:05.365 14:07:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:05.365 14:07:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:05.365 14:07:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:05.365 14:07:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:05.365 14:07:46 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:05.365 node0=512 expecting 513 00:03:05.365 14:07:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:05.365 14:07:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:05.365 14:07:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:05.365 14:07:46 -- 
setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:05.365 node1=513 expecting 512 00:03:05.365 14:07:46 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:05.365 00:03:05.365 real 0m1.160s 00:03:05.365 user 0m0.522s 00:03:05.365 sys 0m0.669s 00:03:05.365 14:07:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:05.365 14:07:46 -- common/autotest_common.sh@10 -- # set +x 00:03:05.365 ************************************ 00:03:05.365 END TEST odd_alloc 00:03:05.365 ************************************ 00:03:05.365 14:07:46 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:05.365 14:07:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:05.365 14:07:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:05.365 14:07:46 -- common/autotest_common.sh@10 -- # set +x 00:03:05.365 ************************************ 00:03:05.365 START TEST custom_alloc 00:03:05.365 ************************************ 00:03:05.365 14:07:46 -- common/autotest_common.sh@1111 -- # custom_alloc 00:03:05.365 14:07:46 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:05.365 14:07:46 -- setup/hugepages.sh@169 -- # local node 00:03:05.365 14:07:46 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:05.365 14:07:46 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:05.365 14:07:46 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:05.365 14:07:46 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:05.365 14:07:46 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:05.365 14:07:46 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:05.365 14:07:46 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:05.365 14:07:46 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:05.365 14:07:46 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:05.365 14:07:46 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:05.365 14:07:46 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:05.365 14:07:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:05.365 14:07:46 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:05.365 14:07:46 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:05.365 14:07:46 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:05.365 14:07:46 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:05.365 14:07:46 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:05.365 14:07:46 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:05.365 14:07:46 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:05.365 14:07:46 -- setup/hugepages.sh@83 -- # : 256 00:03:05.365 14:07:46 -- setup/hugepages.sh@84 -- # : 1 00:03:05.365 14:07:46 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:05.365 14:07:46 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:05.365 14:07:46 -- setup/hugepages.sh@83 -- # : 0 00:03:05.365 14:07:46 -- setup/hugepages.sh@84 -- # : 0 00:03:05.365 14:07:46 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:05.365 14:07:46 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:05.365 14:07:46 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:05.365 14:07:46 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:05.365 14:07:46 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:05.365 14:07:46 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:05.365 14:07:46 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:05.365 14:07:46 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:05.365 14:07:46 
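The two nr_hugepages values above follow directly from the requested sizes: get_test_nr_hugepages takes a size in kB and, once it passes the (( size >= default_hugepages )) guard, derives the page count from the 2048 kB default hugepage size that this run's meminfo snapshots report as Hugepagesize. A minimal sketch of that arithmetic (the division is inferred from the traced results, not copied from the SPDK source):

    # sketch: map the two requested sizes to page counts, assuming the
    # 2048 kB Hugepagesize reported by this run's meminfo snapshots
    default_hugepages=2048   # kB per hugepage
    for size in 1048576 2097152; do
        echo "size=${size} kB -> nr_hugepages=$((size / default_hugepages))"
    done
    # prints 512 and 1024, matching nr_hugepages=512 and nr_hugepages=1024 above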
00:03:05.365 14:07:46 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:05.365 14:07:46 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:05.365 14:07:46 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:05.365 14:07:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:05.365 14:07:46 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:05.365 14:07:46 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:05.365 14:07:46 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:05.365 14:07:46 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:05.365 14:07:46 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:05.365 14:07:46 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:05.365 14:07:46 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:05.365 14:07:46 -- setup/hugepages.sh@78 -- # return 0
00:03:05.365 14:07:46 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:05.365 14:07:46 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:05.365 14:07:46 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:05.365 14:07:46 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:05.365 14:07:46 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:05.365 14:07:46 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:05.365 14:07:46 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:05.365 14:07:46 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:05.365 14:07:46 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:05.365 14:07:46 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:05.365 14:07:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:05.365 14:07:46 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:05.365 14:07:46 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:05.365 14:07:46 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:05.365 14:07:46 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:05.365 14:07:46 -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:05.365 14:07:46 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:05.365 14:07:46 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:05.365 14:07:46 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:05.365 14:07:46 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:05.365 14:07:46 -- setup/hugepages.sh@78 -- # return 0
00:03:05.365 14:07:46 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:05.365 14:07:46 -- setup/hugepages.sh@187 -- # setup output
00:03:05.365 14:07:46 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:05.365 14:07:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
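HUGENODE is the only piece of state custom_alloc hands to the setup script: the per-node counts are joined into the comma-separated string seen at hugepages.sh@187, and setup.sh is expected to reserve pages accordingly. Reproducing this step standalone would look roughly like the following (same values and script path as in this run):

    # hand the same per-node hugepage layout to setup.sh by hand
    HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh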
00:03:06.309 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver
00:03:06.309 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:06.309 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver
00:03:06.309 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver
00:03:06.309 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver
00:03:06.309 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver
00:03:06.309 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver
00:03:06.309 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver
00:03:06.309 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver
00:03:06.309 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver
00:03:06.310 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver
00:03:06.310 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver
00:03:06.310 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver
00:03:06.310 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver
00:03:06.310 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver
00:03:06.310 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver
00:03:06.310 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver
00:03:06.310 14:07:47 -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:06.310 14:07:47 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:06.310 14:07:47 -- setup/hugepages.sh@89 -- # local node
00:03:06.310 14:07:47 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:06.310 14:07:47 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:06.310 14:07:47 -- setup/hugepages.sh@92 -- # local surp
00:03:06.310 14:07:47 -- setup/hugepages.sh@93 -- # local resv
00:03:06.310 14:07:47 -- setup/hugepages.sh@94 -- # local anon
00:03:06.310 14:07:47 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:06.572 14:07:47 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:06.572 14:07:47 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:06.572 14:07:47 -- setup/common.sh@18 -- # local node=
00:03:06.572 14:07:47 -- setup/common.sh@19 -- # local var val
00:03:06.572 14:07:47 -- setup/common.sh@20 -- # local mem_f mem
00:03:06.572 14:07:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.572 14:07:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:06.572 14:07:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:06.572 14:07:47 -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.572 14:07:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.572 14:07:47 -- setup/common.sh@31 -- # IFS=': '
00:03:06.572 14:07:47 -- setup/common.sh@31 -- # read -r var val _
00:03:06.572 14:07:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 31271536 kB' 'MemAvailable: 35642732 kB' 'Buffers: 2696 kB' 'Cached: 15716980 kB' 'SwapCached: 0 kB' 'Active: 12799732 kB' 'Inactive: 3552704 kB' 'Active(anon): 11604520 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635940 kB' 'Mapped: 219652 kB' 'Shmem: 10971760 kB' 'KReclaimable: 184324 kB' 'Slab: 459584 kB' 'SReclaimable: 184324 kB' 'SUnreclaim: 275260 kB' 'KernelStack: 10000 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 12633844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189872 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB'
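Each of these get_meminfo calls traces the same small routine in setup/common.sh: pick /proc/meminfo (or the node-specific /sys/devices/system/node/nodeN/meminfo when a node argument is given and that file exists), strip the "Node N " prefix that the per-node files carry, then scan key by key until the requested field matches and echo its value. A self-contained reconstruction from the traced lines, offered as a sketch rather than the exact SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob   # the "Node +([0-9]) " strip pattern needs extglob

    # sketch of setup/common.sh's get_meminfo, reconstructed from the xtrace
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # prefer the per-node view when a node was requested and it exists
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"                     # IFS=': ' already split off the " kB"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1                            # key not present
    }

    get_meminfo AnonHugePages   # prints 0 on this box, per the snapshot above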
[xtrace elided: setup/common.sh@31-32 read/continue over every key from MemTotal to HardwareCorrupted, none matching AnonHugePages]
00:03:06.573 14:07:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:06.573 14:07:47 -- setup/common.sh@33 -- # echo 0
00:03:06.573 14:07:47 -- setup/common.sh@33 -- # return 0
00:03:06.573 14:07:47 -- setup/hugepages.sh@97 -- # anon=0
00:03:06.573 14:07:47 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:06.573 14:07:47 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:06.574 14:07:47 -- setup/common.sh@18 -- # local node=
00:03:06.574 14:07:47 -- setup/common.sh@19 -- # local var val
00:03:06.574 14:07:47 -- setup/common.sh@20 -- # local mem_f mem
00:03:06.574 14:07:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.574 14:07:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:06.574 14:07:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:06.574 14:07:47 -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.574 14:07:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.574 14:07:47 -- setup/common.sh@31 -- # IFS=': '
00:03:06.574 14:07:47 -- setup/common.sh@31 -- # read -r var val _
00:03:06.574 14:07:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 31271536 kB' 'MemAvailable: 35642732 kB' 'Buffers: 2696 kB' 'Cached: 15716980 kB' 'SwapCached: 0 kB' 'Active: 12800124 kB' 'Inactive: 3552704 kB' 'Active(anon): 11604912 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636332 kB' 'Mapped: 219652 kB' 'Shmem: 10971760 kB' 'KReclaimable: 184324 kB' 'Slab: 459584 kB' 'SReclaimable: 184324 kB' 'SUnreclaim: 275260 kB' 'KernelStack: 9968 kB' 'PageTables: 8236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 12633856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189824 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB'
[xtrace elided: setup/common.sh@31-32 read/continue over every key from MemTotal to HugePages_Rsvd, none matching HugePages_Surp]
00:03:06.575 14:07:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:06.575 14:07:47 -- setup/common.sh@33 -- # echo 0
00:03:06.575 14:07:47 -- setup/common.sh@33 -- # return 0
00:03:06.575 14:07:47 -- setup/hugepages.sh@99 -- # surp=0
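anon and surp are both 0 on this box; the query below fetches HugePages_Rsvd the same way, and hugepages.sh@107 then asserts the accounting identity HugePages_Total == nr_hugepages + surp + resv. As a worked form of that check with this run's values (resv comes out 0 just below):

    # worked form of the hugepages.sh@107 check, values from this run
    nr_hugepages=1536 surp=0 resv=0
    (( 1536 == nr_hugepages + surp + resv )) && echo 'hugepage accounting consistent'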
14:07:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.575 14:07:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.575 14:07:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.575 14:07:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.575 14:07:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.575 14:07:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.575 14:07:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 31272236 kB' 'MemAvailable: 35643432 kB' 'Buffers: 2696 kB' 'Cached: 15716992 kB' 'SwapCached: 0 kB' 'Active: 12799100 kB' 'Inactive: 3552704 kB' 'Active(anon): 11603888 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635260 kB' 'Mapped: 219616 kB' 'Shmem: 10971772 kB' 'KReclaimable: 184324 kB' 'Slab: 459584 kB' 'SReclaimable: 184324 kB' 'SUnreclaim: 275260 kB' 'KernelStack: 10000 kB' 'PageTables: 8540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 12633872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189808 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB' 00:03:06.575 14:07:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.575 14:07:47 -- setup/common.sh@32 -- # continue 00:03:06.575 14:07:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.575 14:07:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.575 14:07:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.575 14:07:47 -- setup/common.sh@32 -- # continue 00:03:06.575 14:07:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.575 14:07:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.575 14:07:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.575 14:07:47 -- setup/common.sh@32 -- # continue 00:03:06.575 14:07:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.575 14:07:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.575 14:07:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.575 14:07:47 -- setup/common.sh@32 -- # continue 00:03:06.575 14:07:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.575 14:07:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.575 14:07:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.575 14:07:47 -- setup/common.sh@32 -- # continue 00:03:06.575 14:07:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.575 14:07:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.575 14:07:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.575 14:07:47 -- setup/common.sh@32 -- # continue 00:03:06.575 14:07:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.575 14:07:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.575 14:07:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.575 14:07:47 -- 
setup/common.sh@32 -- # continue
00:03:06.575 14:07:47 -- setup/common.sh@31 -- # IFS=': '
00:03:06.575 14:07:47 -- setup/common.sh@31 -- # read -r var val _
00:03:06.575 [... the common.sh@31/@32 IFS/read/compare/continue xtrace repeats for each remaining /proc/meminfo key (Inactive through HugePages_Free) until the requested key matches ...]
00:03:06.576 14:07:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:06.576 14:07:47 -- setup/common.sh@33 -- # echo 0
00:03:06.576 14:07:47 -- setup/common.sh@33 -- # return 0
00:03:06.576 14:07:47 -- setup/hugepages.sh@100 -- # resv=0
00:03:06.576 14:07:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:06.576 nr_hugepages=1536
00:03:06.576 14:07:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:06.576 resv_hugepages=0
00:03:06.576 14:07:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:06.576 surplus_hugepages=0
00:03:06.576 14:07:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:06.576 anon_hugepages=0
00:03:06.576 14:07:47 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:06.576 14:07:47 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
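Note on the trace above: the backslash-escaped right-hand sides such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d are just how bash -x renders a quoted, literal [[ == ]] comparison inside the get_meminfo loop. As a hedged illustration only (a hypothetical standalone helper, not common.sh's actual code), the field lookup that produces each value tallied at hugepages.sh@100-@109 reduces to a few lines of bash:

    # get_field KEY [NODE] -- print KEY's value from /proc/meminfo, or from the
    # per-node sysfs file when a NUMA node is given (hypothetical sketch).
    get_field() {
      local key=$1 node=${2-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line var val _
      while IFS= read -r line; do
        line=${line#Node "$node" }        # per-node files prefix each line with "Node <N> "
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$key" ]]; then     # quoted RHS: literal match, as rendered in the xtrace
          echo "${val:-0}"                # value only; a trailing "kB" unit lands in $_
          return 0
        fi
      done <"$mem_f"
      return 1
    }

    get_field HugePages_Rsvd      # would print 0, the value echoed at common.sh@33 above
    get_field HugePages_Total 0   # would print 512 on node 0 of this box

The linear scan explains the wall of IFS/read/continue lines in the trace: every key before the requested one produces one compare-and-continue iteration.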
00:03:06.576 14:07:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:06.576 14:07:47 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:06.576 14:07:47 -- setup/common.sh@18 -- # local node=
00:03:06.576 14:07:47 -- setup/common.sh@19 -- # local var val
00:03:06.576 14:07:47 -- setup/common.sh@20 -- # local mem_f mem
00:03:06.576 14:07:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.576 14:07:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:06.576 14:07:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:06.576 14:07:47 -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.577 14:07:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.577 14:07:47 -- setup/common.sh@31 -- # IFS=': '
00:03:06.577 14:07:47 -- setup/common.sh@31 -- # read -r var val _
00:03:06.577 14:07:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 31272236 kB' 'MemAvailable: 35643432 kB' 'Buffers: 2696 kB' 'Cached: 15717008 kB' 'SwapCached: 0 kB' 'Active: 12799340 kB' 'Inactive: 3552704 kB' 'Active(anon): 11604128 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635520 kB' 'Mapped: 219616 kB' 'Shmem: 10971788 kB' 'KReclaimable: 184324 kB' 'Slab: 459584 kB' 'SReclaimable: 184324 kB' 'SUnreclaim: 275260 kB' 'KernelStack: 9984 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 12633884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189808 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB'
00:03:06.577 [... the common.sh@31/@32 IFS/read/compare/continue xtrace repeats for each /proc/meminfo key (MemTotal through HugePages_Free) ...]
00:03:06.578 14:07:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:06.578 14:07:47 -- setup/common.sh@33 -- # echo 1536
00:03:06.578 14:07:47 -- setup/common.sh@33 -- # return 0
00:03:06.578 14:07:47 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:06.578 14:07:47 -- setup/hugepages.sh@112 -- # get_nodes
00:03:06.578 14:07:47 -- setup/hugepages.sh@27 -- # local node
00:03:06.578 14:07:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:06.578 14:07:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:06.578 14:07:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:06.578 14:07:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:06.578 14:07:47 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:06.578 14:07:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:06.578 14:07:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:06.578 14:07:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:06.578 14:07:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:06.578 14:07:47 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:06.578 14:07:47 -- setup/common.sh@18 -- # local node=0
00:03:06.578 14:07:47 -- setup/common.sh@19 -- # local var val
00:03:06.578 14:07:47 -- setup/common.sh@20 -- # local mem_f mem
00:03:06.578 14:07:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.578 14:07:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:06.578 14:07:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:06.578 14:07:47 -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.578 14:07:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.578 14:07:47 -- setup/common.sh@31 -- # IFS=': '
00:03:06.578 14:07:47 -- setup/common.sh@31 -- # read -r var val _
00:03:06.578 14:07:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 18443104 kB' 'MemUsed: 14391588 kB' 'SwapCached: 0 kB' 'Active: 9169628 kB' 'Inactive: 3414024 kB' 'Active(anon): 8231584 kB' 'Inactive(anon): 0 kB' 'Active(file): 938044 kB' 'Inactive(file): 3414024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12152924 kB' 'Mapped: 133496 kB' 'AnonPages: 433832 kB' 'Shmem: 7800856 kB' 'KernelStack: 6088 kB' 'PageTables: 4916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121756 kB' 'Slab: 274280 kB' 'SReclaimable: 121756 kB' 'SUnreclaim: 152524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:06.578 [... the common.sh@31/@32 IFS/read/compare/continue xtrace repeats for each node0 meminfo key (MemTotal through HugePages_Free) ...]
00:03:06.580 14:07:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:06.580 14:07:48 -- setup/common.sh@33 -- # echo 0
00:03:06.580 14:07:48 -- setup/common.sh@33 -- # return 0
00:03:06.580 14:07:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:06.580 14:07:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:06.580 14:07:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
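At this point node 0's surplus has been folded into nodes_test and the loop advances to node 1, repeating the same per-node query. A condensed, hypothetical equivalent of the get_nodes walk plus the per-node tally (the script itself goes through get_meminfo; this sketch reads the per-node meminfo files directly with awk):

    shopt -s extglob                  # needed for the node+([0-9]) glob, as in hugepages.sh
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
      id=${node##*node}
      # per-node meminfo lines read "Node <N> HugePages_Total: <count>"
      nodes_sys[id]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
    done
    for id in "${!nodes_sys[@]}"; do
      echo "node$id=${nodes_sys[id]}"   # on this box: node0=512, node1=1024
    done

Indexing the array by the numeric node id, as the script does, keeps the later node0/node1 comparison a simple positional walk.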
00:03:06.580 14:07:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:06.579 14:07:48 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:06.579 14:07:48 -- setup/common.sh@18 -- # local node=1
00:03:06.579 14:07:48 -- setup/common.sh@19 -- # local var val
00:03:06.579 14:07:48 -- setup/common.sh@20 -- # local mem_f mem
00:03:06.579 14:07:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.579 14:07:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:06.579 14:07:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:06.579 14:07:48 -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.579 14:07:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.579 14:07:48 -- setup/common.sh@31 -- # IFS=': '
00:03:06.579 14:07:48 -- setup/common.sh@31 -- # read -r var val _
00:03:06.579 14:07:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19456488 kB' 'MemFree: 12829132 kB' 'MemUsed: 6627356 kB' 'SwapCached: 0 kB' 'Active: 3629768 kB' 'Inactive: 138680 kB' 'Active(anon): 3372600 kB' 'Inactive(anon): 0 kB' 'Active(file): 257168 kB' 'Inactive(file): 138680 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3566796 kB' 'Mapped: 86120 kB' 'AnonPages: 201680 kB' 'Shmem: 3170948 kB' 'KernelStack: 3896 kB' 'PageTables: 3596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 62568 kB' 'Slab: 185304 kB' 'SReclaimable: 62568 kB' 'SUnreclaim: 122736 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:06.579 [... the common.sh@31/@32 IFS/read/compare/continue xtrace repeats for each node1 meminfo key (MemTotal through HugePages_Free) ...]
00:03:06.580 14:07:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:06.580 14:07:48 -- setup/common.sh@33 -- # echo 0
00:03:06.580 14:07:48 -- setup/common.sh@33 -- # return 0
00:03:06.580 14:07:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
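With both nodes collected, the test's closing check is just a literal comparison of the observed split against the expected one. A hedged sketch of that final step (not hugepages.sh's actual code, which compares meminfo-derived values; this version reads the equivalent nr_hugepages sysfs attributes, and assumes fewer than ten nodes so the glob sorts node0 before node1):

    expected="512,1024"   # node0,node1 -- the split custom_alloc configured above
    observed=$(cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages |
               paste -sd, -)
    if [[ $observed == "$expected" ]]; then
      echo "per-node hugepage split verified: $observed"
    else
      echo "unexpected split: got $observed, wanted $expected" >&2
      exit 1
    fi

The trace below shows exactly this outcome: 512,1024 matches, custom_alloc passes, and the next test starts.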
00:03:06.580 14:07:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:06.580 14:07:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:06.580 14:07:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:06.580 14:07:48 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:06.580 node0=512 expecting 512
00:03:06.580 14:07:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:06.580 14:07:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:06.580 14:07:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:06.580 14:07:48 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:06.580 node1=1024 expecting 1024
00:03:06.580 14:07:48 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:06.580
00:03:06.580 real 0m1.146s
00:03:06.580 user 0m0.517s
00:03:06.580 sys 0m0.657s
00:03:06.580 14:07:48 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:06.580 14:07:48 -- common/autotest_common.sh@10 -- # set +x
00:03:06.580 ************************************
00:03:06.580 END TEST custom_alloc
00:03:06.580 ************************************
00:03:06.580 14:07:48 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:06.580 14:07:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:06.580 14:07:48 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:06.580 14:07:48 -- common/autotest_common.sh@10 -- # set +x
00:03:06.839 ************************************
00:03:06.839 START TEST no_shrink_alloc
00:03:06.839 ************************************
00:03:06.839 14:07:48 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:06.839 14:07:48 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:06.839 14:07:48 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:06.839 14:07:48 -- setup/hugepages.sh@51 -- # shift
00:03:06.839 14:07:48 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:06.839 14:07:48 -- setup/hugepages.sh@52 -- # local node_ids
00:03:06.839 14:07:48 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:06.839 14:07:48 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:06.839 14:07:48 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:06.839 14:07:48 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:06.839 14:07:48 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:06.839 14:07:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:06.839 14:07:48 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:06.839 14:07:48 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:06.839 14:07:48 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:06.839 14:07:48 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:06.839 14:07:48 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:06.839 14:07:48 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:06.839 14:07:48 -- setup/hugepages.sh@73 -- # return 0
00:03:06.839 14:07:48 -- setup/hugepages.sh@198 -- # setup output
00:03:06.839 14:07:48 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:06.839 14:07:48 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:07.777 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver
00:03:07.777 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:07.777 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver
00:03:07.777 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver
00:03:07.777 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver
00:03:07.777 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver
00:03:07.777 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver
00:03:07.777 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver
00:03:07.777 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver
00:03:07.777 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver
00:03:07.777 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver
00:03:07.777 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver
00:03:07.777 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver
00:03:07.777 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver
00:03:07.777 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver
00:03:07.777 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver
00:03:07.777 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver
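Before the verification trace resumes, the arithmetic behind the no_shrink_alloc setup above is worth spelling out: get_test_nr_hugepages 2097152 0 arrived at nr_hugepages=1024 pinned to node 0. A minimal sanity sketch (reading the 2097152 argument as kB is an assumption; only the ratio is certain from the trace):

    size_kb=2097152
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this box
    echo "nr_hugepages=$((size_kb / hugepagesize_kb))"                   # 2097152 / 2048 = 1024
    # Cross-check: 1024 pages * 2048 kB = 2097152 kB, which is exactly the
    # 'Hugetlb: 2097152 kB' value reported in the snapshot that follows.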
00:03:07.777 14:07:49 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:07.777 14:07:49 -- setup/hugepages.sh@89 -- # local node
00:03:07.777 14:07:49 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:07.777 14:07:49 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:07.777 14:07:49 -- setup/hugepages.sh@92 -- # local surp
00:03:07.777 14:07:49 -- setup/hugepages.sh@93 -- # local resv
00:03:07.777 14:07:49 -- setup/hugepages.sh@94 -- # local anon
00:03:07.777 14:07:49 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:07.777 14:07:49 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:07.777 14:07:49 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:07.777 14:07:49 -- setup/common.sh@18 -- # local node=
00:03:07.777 14:07:49 -- setup/common.sh@19 -- # local var val
00:03:07.777 14:07:49 -- setup/common.sh@20 -- # local mem_f mem
00:03:07.777 14:07:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:07.777 14:07:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:07.777 14:07:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:07.777 14:07:49 -- setup/common.sh@28 -- # mapfile -t mem
00:03:07.777 14:07:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:07.777 14:07:49 -- setup/common.sh@31 -- # IFS=': '
00:03:07.777 14:07:49 -- setup/common.sh@31 -- # read -r var val _
00:03:07.777 14:07:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32181576 kB' 'MemAvailable: 36552772 kB' 'Buffers: 2696 kB' 'Cached: 15717068 kB' 'SwapCached: 0 kB' 'Active: 12799648 kB' 'Inactive: 3552704 kB' 'Active(anon): 11604436 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635676 kB' 'Mapped: 219656 kB' 'Shmem: 10971848 kB' 'KReclaimable: 184324 kB' 'Slab: 459692 kB' 'SReclaimable: 184324 kB' 'SUnreclaim: 275368 kB' 'KernelStack: 9968 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12633764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189904 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB'
00:03:07.777 [... the common.sh@31/@32 IFS/read/compare/continue xtrace repeats for each /proc/meminfo key (MemTotal through HardwareCorrupted) ...]
00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:07.778 14:07:49 -- setup/common.sh@33 -- # echo 0
00:03:07.778 14:07:49 -- setup/common.sh@33 -- # return 0
00:03:07.778 14:07:49 -- setup/hugepages.sh@97 -- # anon=0
00:03:07.778 14:07:49 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:07.778 14:07:49 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:07.778 14:07:49 -- setup/common.sh@18 -- # local node=
00:03:07.778 14:07:49 -- setup/common.sh@19 -- # local var val
00:03:07.778 14:07:49 -- setup/common.sh@20 -- # local mem_f mem
00:03:07.778 14:07:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:07.778 14:07:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:07.778 14:07:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:07.778 14:07:49 -- setup/common.sh@28 -- # mapfile -t mem
00:03:07.778 14:07:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': '
00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _
00:03:07.778 14:07:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32181576 kB' 'MemAvailable: 36552772 kB' 'Buffers: 2696 kB' 'Cached: 15717072 kB' 'SwapCached: 0 kB' 'Active: 12800240 kB' 'Inactive: 3552704 kB' 'Active(anon): 11605028 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB'
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636296 kB' 'Mapped: 219540 kB' 'Shmem: 10971852 kB' 'KReclaimable: 184324 kB' 'Slab: 459688 kB' 'SReclaimable: 184324 kB' 'SUnreclaim: 275364 kB' 'KernelStack: 9952 kB' 'PageTables: 8384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12633540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189840 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB' 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- 
setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.778 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.778 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.779 14:07:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.779 14:07:49 -- setup/common.sh@33 -- # echo 0 00:03:07.779 14:07:49 -- setup/common.sh@33 -- # return 0 00:03:07.779 14:07:49 -- setup/hugepages.sh@99 -- # surp=0 00:03:07.779 14:07:49 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:07.779 14:07:49 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:07.779 14:07:49 -- setup/common.sh@18 -- # local node= 00:03:07.779 14:07:49 -- setup/common.sh@19 -- # local var val 00:03:07.779 14:07:49 -- setup/common.sh@20 -- # local mem_f mem 00:03:07.779 14:07:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.779 14:07:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.779 14:07:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.779 14:07:49 -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.779 14:07:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.779 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.779 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.779 14:07:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32181576 kB' 'MemAvailable: 36552772 kB' 'Buffers: 2696 kB' 'Cached: 15717076 kB' 'SwapCached: 0 kB' 'Active: 12799016 kB' 'Inactive: 3552704 kB' 'Active(anon): 11603804 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635088 kB' 'Mapped: 219652 kB' 'Shmem: 10971856 kB' 'KReclaimable: 184324 kB' 'Slab: 459684 kB' 'SReclaimable: 184324 kB' 'SUnreclaim: 275360 kB' 'KernelStack: 9936 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12633556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189840 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB' 00:03:07.779 14:07:49 -- 
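The block above is one call pattern repeated throughout this phase: get_meminfo in setup/common.sh snapshots a meminfo file and scans it key by key until the requested field matches. A minimal sketch, reconstructed from this xtrace alone (the real SPDK helper may differ in detail; the trailing usage lines are illustrative):

    shopt -s extglob

    get_meminfo() {
        local get=$1
        local node=$2
        local var val
        local mem_f mem

        # Default to the machine-wide counters; a node argument switches to
        # the per-NUMA-node file (with no argument the probe below tests
        # /sys/devices/system/node/node/meminfo, which never exists).
        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node <N> " prefix that /proc/meminfo lacks;
        # strip it (extglob pattern) so one parser handles both files.
        mem=("${mem[@]#Node +([0-9]) }")

        # The compare/continue loop seen in the trace: split each line on
        # ': ' and return the value of the first matching key.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp     # 0 in this run
    get_meminfo HugePages_Total 0  # per-node form, as called later in the trace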
00:03:07.779 14:07:49 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:07.779 14:07:49 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:07.779 14:07:49 -- setup/common.sh@18 -- # local node=
00:03:07.779 14:07:49 -- setup/common.sh@19 -- # local var val
00:03:07.779 14:07:49 -- setup/common.sh@20 -- # local mem_f mem
00:03:07.779 14:07:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:07.779 14:07:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:07.779 14:07:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:07.779 14:07:49 -- setup/common.sh@28 -- # mapfile -t mem
00:03:07.779 14:07:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:07.779 14:07:49 -- setup/common.sh@31 -- # IFS=': '
00:03:07.779 14:07:49 -- setup/common.sh@31 -- # read -r var val _
00:03:07.779 14:07:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32181576 kB' 'MemAvailable: 36552772 kB' 'Buffers: 2696 kB' 'Cached: 15717076 kB' 'SwapCached: 0 kB' 'Active: 12799016 kB' 'Inactive: 3552704 kB' 'Active(anon): 11603804 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635088 kB' 'Mapped: 219652 kB' 'Shmem: 10971856 kB' 'KReclaimable: 184324 kB' 'Slab: 459684 kB' 'SReclaimable: 184324 kB' 'SUnreclaim: 275360 kB' 'KernelStack: 9936 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12633556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189840 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB'
00:03:07.779 14:07:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:07.779 14:07:49 -- setup/common.sh@32 -- # continue
[... compare/continue records for the remaining non-matching keys elided ...]
00:03:07.779 14:07:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:07.779 14:07:49 -- setup/common.sh@33 -- # echo 0
00:03:07.779 14:07:49 -- setup/common.sh@33 -- # return 0
00:03:07.779 14:07:49 -- setup/hugepages.sh@100 -- # resv=0
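A note on the backslash runs such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d above: under set -x, bash re-prints a right-hand side of a [[ == ]] test that was quoted in the source with every character escaped, marking it as a literal string match rather than a glob pattern. A short demo reproduces the effect:

    set -x
    key=HugePages_Rsvd
    [[ $key == "HugePages_Rsvd" ]]
    # trace output: [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]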
00:03:07.779 14:07:49 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:07.779 nr_hugepages=1024
00:03:07.779 14:07:49 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:07.779 resv_hugepages=0
00:03:07.779 14:07:49 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:07.779 surplus_hugepages=0
00:03:07.779 14:07:49 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:07.779 anon_hugepages=0
00:03:07.779 14:07:49 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:07.779 14:07:49 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
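The four echoes above are the summary the test gates on: the kernel-reported pool must equal the requested pages plus any surplus and reserved pages. A hedged sketch of the same accounting, reusing the get_meminfo sketch from earlier (variable names follow the trace; exiting on mismatch is an assumption about how the real script reacts):

    nr_hugepages=1024                      # requested pool size in this run
    anon=$(get_meminfo AnonHugePages)      # 0
    surp=$(get_meminfo HugePages_Surp)     # 0
    resv=$(get_meminfo HugePages_Rsvd)     # 0
    total=$(get_meminfo HugePages_Total)   # 1024

    # Mirrors hugepages.sh@107/@109: with no surplus or reserved pages,
    # the pool must match the request exactly.
    (( total == nr_hugepages + surp + resv )) || exit 1
    (( total == nr_hugepages )) && echo "hugepage pool matches the request"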
00:03:07.779 14:07:49 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:07.779 14:07:49 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:07.779 14:07:49 -- setup/common.sh@18 -- # local node=
00:03:07.779 14:07:49 -- setup/common.sh@19 -- # local var val
00:03:07.779 14:07:49 -- setup/common.sh@20 -- # local mem_f mem
00:03:07.779 14:07:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:07.779 14:07:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:07.779 14:07:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:07.779 14:07:49 -- setup/common.sh@28 -- # mapfile -t mem
00:03:07.779 14:07:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:07.779 14:07:49 -- setup/common.sh@31 -- # IFS=': '
00:03:07.779 14:07:49 -- setup/common.sh@31 -- # read -r var val _
00:03:07.780 14:07:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32181888 kB' 'MemAvailable: 36553084 kB' 'Buffers: 2696 kB' 'Cached: 15717080 kB' 'SwapCached: 0 kB' 'Active: 12799340 kB' 'Inactive: 3552704 kB' 'Active(anon): 11604128 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635444 kB' 'Mapped: 219652 kB' 'Shmem: 10971860 kB' 'KReclaimable: 184324 kB' 'Slab: 459684 kB' 'SReclaimable: 184324 kB' 'SUnreclaim: 275360 kB' 'KernelStack: 9984 kB' 'PageTables: 8496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12633936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189856 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB'
00:03:07.780 14:07:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:07.780 14:07:49 -- setup/common.sh@32 -- # continue
[... compare/continue records for the remaining non-matching keys elided ...]
00:03:07.781 14:07:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:07.781 14:07:49 -- setup/common.sh@33 -- # echo 1024
00:03:07.781 14:07:49 -- setup/common.sh@33 -- # return 0
00:03:07.781 14:07:49 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:07.781 14:07:49 -- setup/hugepages.sh@112 -- # get_nodes
00:03:07.781 14:07:49 -- setup/hugepages.sh@27 -- # local node
00:03:07.781 14:07:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:07.781 14:07:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:07.781 14:07:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:07.781 14:07:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:07.781 14:07:49 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:07.781 14:07:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:07.781 14:07:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:07.781 14:07:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
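get_nodes, traced just above, walks the NUMA node directories and records each node's hugepage count (1024 on node0, 0 on node1 here) before the per-node HugePages_Surp lookup that follows. A rough reconstruction; reading each node's hugepages-2048kB/nr_hugepages counter is an assumption, since the trace only shows the resulting assignments:

    shopt -s extglob nullglob
    nodes_sys=()

    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # ${node##*node} keeps only the numeric suffix (node0 -> 0).
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))   # fail if no NUMA nodes were found
    }

    get_nodes && echo "nodes: ${!nodes_sys[*]} -> ${nodes_sys[*]}"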
00:03:07.781 14:07:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:07.781 14:07:49 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:07.781 14:07:49 -- setup/common.sh@18 -- # local node=0
00:03:07.781 14:07:49 -- setup/common.sh@19 -- # local var val
00:03:07.781 14:07:49 -- setup/common.sh@20 -- # local mem_f mem
00:03:07.781 14:07:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:07.781 14:07:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:07.781 14:07:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:07.781 14:07:49 -- setup/common.sh@28 -- # mapfile -t mem
00:03:07.781 14:07:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:07.781 14:07:49 -- setup/common.sh@31 -- # IFS=': '
00:03:07.781 14:07:49 -- setup/common.sh@31 -- # read -r var val _
00:03:07.781 14:07:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 17378004 kB' 'MemUsed: 15456688 kB' 'SwapCached: 0 kB' 'Active: 9170540 kB' 'Inactive: 3414024 kB' 'Active(anon): 8232496 kB' 'Inactive(anon): 0 kB' 'Active(file): 938044 kB' 'Inactive(file): 3414024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12153012 kB' 'Mapped: 133532 kB' 'AnonPages: 434680 kB' 'Shmem: 7800944 kB' 'KernelStack: 6120 kB' 'PageTables: 5024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121756 kB' 'Slab: 274408 kB' 'SReclaimable: 121756 kB' 'SUnreclaim: 152652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:07.781 14:07:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:07.781 14:07:49 -- setup/common.sh@32 -- # continue
[... compare/continue records for intervening non-matching keys elided ...]
00:03:07.781 14:07:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:07.781 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.781 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.781 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.781 14:07:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.781 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.781 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.781 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.781 14:07:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.781 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.781 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.781 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.781 14:07:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.781 14:07:49 -- setup/common.sh@32 -- # continue 00:03:07.781 14:07:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:07.781 14:07:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:07.781 14:07:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.781 14:07:49 -- setup/common.sh@33 -- # echo 0 00:03:07.781 14:07:49 -- setup/common.sh@33 -- # return 0 00:03:07.781 14:07:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:07.781 14:07:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:07.781 14:07:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:07.781 14:07:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:07.781 14:07:49 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:07.781 node0=1024 expecting 1024 00:03:07.781 14:07:49 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:07.782 14:07:49 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:07.782 14:07:49 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:07.782 14:07:49 -- setup/hugepages.sh@202 -- # setup output 00:03:07.782 14:07:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:07.782 14:07:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:08.718 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:03:08.718 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:08.718 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:03:08.718 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:03:08.718 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:03:08.718 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:03:08.718 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:03:08.718 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:03:08.718 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:03:08.718 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:03:08.718 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:03:08.718 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:03:08.718 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:03:08.718 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:03:08.718 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:03:08.718 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:03:08.718 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:03:08.718 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:08.980 14:07:50 -- setup/hugepages.sh@204 -- # 
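The scans condensed above all follow one pattern: pick a meminfo source (the node's sysfs file when a node is given, /proc/meminfo otherwise), strip the "Node N " prefix that sysfs adds to each row, then walk the fields until the requested one matches. A minimal standalone sketch of that pattern follows; the suite's real helper is get_meminfo in setup/common.sh, and the function name and argument handling here are illustrative only:

  get_meminfo_sketch() {
      local get=$1 node=${2:-} mem_f mem line var val _
      mem_f=/proc/meminfo
      # Per-node counters come from sysfs when that node's file exists.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem <"$mem_f"
      mem=("${mem[@]#Node "$node" }") # sysfs rows are prefixed "Node N "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<<"$line"
          # Skip ("continue" in the trace) until the requested field matches.
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

Called as get_meminfo_sketch HugePages_Surp 0 against the node0 snapshot above, it prints 0, matching the echo 0 in the trace.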
00:03:08.980 14:07:50 -- setup/hugepages.sh@204 -- verify_nr_hugepages
00:03:08.980 14:07:50 -- setup/hugepages.sh@89 -- local node
00:03:08.980 14:07:50 -- setup/hugepages.sh@90 -- local sorted_t
00:03:08.980 14:07:50 -- setup/hugepages.sh@91 -- local sorted_s
00:03:08.980 14:07:50 -- setup/hugepages.sh@92 -- local surp
00:03:08.980 14:07:50 -- setup/hugepages.sh@93 -- local resv
00:03:08.980 14:07:50 -- setup/hugepages.sh@94 -- local anon
00:03:08.980 14:07:50 -- setup/hugepages.sh@96 -- [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:08.980 14:07:50 -- setup/hugepages.sh@97 -- get_meminfo AnonHugePages
00:03:08.981 14:07:50 -- setup/common.sh@17 -- local get=AnonHugePages
00:03:08.981 14:07:50 -- setup/common.sh@18 -- local node=
00:03:08.981 14:07:50 -- setup/common.sh@19 -- local var val
00:03:08.981 14:07:50 -- setup/common.sh@20 -- local mem_f mem
00:03:08.981 14:07:50 -- setup/common.sh@22 -- mem_f=/proc/meminfo
00:03:08.981 14:07:50 -- setup/common.sh@23 -- [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:08.981 14:07:50 -- setup/common.sh@25 -- [[ -n '' ]]
00:03:08.981 14:07:50 -- setup/common.sh@28 -- mapfile -t mem
00:03:08.981 14:07:50 -- setup/common.sh@29 -- mem=("${mem[@]#Node +([0-9]) }")
00:03:08.981 14:07:50 -- setup/common.sh@31 -- IFS=': '
00:03:08.981 14:07:50 -- setup/common.sh@31 -- read -r var val _
00:03:08.981 14:07:50 -- setup/common.sh@16 -- printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32158132 kB' 'MemAvailable: 36529312 kB' 'Buffers: 2696 kB' 'Cached: 15717144 kB' 'SwapCached: 0 kB' 'Active: 12799856 kB' 'Inactive: 3552704 kB' 'Active(anon): 11604644 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635928 kB' 'Mapped: 219660 kB' 'Shmem: 10971924 kB' 'KReclaimable: 184292 kB' 'Slab: 459660 kB' 'SReclaimable: 184292 kB' 'SUnreclaim: 275368 kB' 'KernelStack: 9984 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12634136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189936 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB'
[xtrace condensed: setup/common.sh@31-32 walks /proc/meminfo field by field, hitting "continue" for every name from MemTotal through HardwareCorrupted]
00:03:08.981 14:07:50 -- setup/common.sh@32 -- [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:08.981 14:07:50 -- setup/common.sh@33 -- echo 0
00:03:08.981 14:07:50 -- setup/common.sh@33 -- return 0
00:03:08.981 14:07:50 -- setup/hugepages.sh@97 -- anon=0
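The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test traced above is the transparent-hugepage gate: the sysfs file brackets the active THP mode, and AnonHugePages is only worth sampling when that mode is not "never". Roughly, as a sketch reusing the helper sketched earlier:

  # The THP mode string looks like "always [madvise] never"; the brackets
  # mark the active mode, so matching "[never]" means THP is fully off.
  anon=0
  if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
      anon=$(get_meminfo_sketch AnonHugePages)
  fi
  echo "anon=$anon" # 0 in this run, matching the anon=0 above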
00:03:08.982 14:07:50 -- setup/hugepages.sh@99 -- get_meminfo HugePages_Surp
00:03:08.982 14:07:50 -- setup/common.sh@17 -- local get=HugePages_Surp
00:03:08.982 14:07:50 -- setup/common.sh@18 -- local node=
00:03:08.982 14:07:50 -- setup/common.sh@19 -- local var val
00:03:08.982 14:07:50 -- setup/common.sh@20 -- local mem_f mem
00:03:08.982 14:07:50 -- setup/common.sh@22 -- mem_f=/proc/meminfo
00:03:08.982 14:07:50 -- setup/common.sh@23 -- [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:08.982 14:07:50 -- setup/common.sh@25 -- [[ -n '' ]]
00:03:08.982 14:07:50 -- setup/common.sh@28 -- mapfile -t mem
00:03:08.982 14:07:50 -- setup/common.sh@29 -- mem=("${mem[@]#Node +([0-9]) }")
00:03:08.982 14:07:50 -- setup/common.sh@31 -- IFS=': '
00:03:08.982 14:07:50 -- setup/common.sh@31 -- read -r var val _
00:03:08.982 14:07:50 -- setup/common.sh@16 -- printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32157880 kB' 'MemAvailable: 36529060 kB' 'Buffers: 2696 kB' 'Cached: 15717148 kB' 'SwapCached: 0 kB' 'Active: 12800200 kB' 'Inactive: 3552704 kB' 'Active(anon): 11604988 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636316 kB' 'Mapped: 219660 kB' 'Shmem: 10971928 kB' 'KReclaimable: 184292 kB' 'Slab: 459648 kB' 'SReclaimable: 184292 kB' 'SUnreclaim: 275356 kB' 'KernelStack: 9952 kB' 'PageTables: 8524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12634148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189888 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB'
[xtrace condensed: the same field-by-field scan continues past every name from MemTotal through HugePages_Rsvd]
00:03:08.983 14:07:50 -- setup/common.sh@32 -- [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:08.983 14:07:50 -- setup/common.sh@33 -- echo 0
00:03:08.983 14:07:50 -- setup/common.sh@33 -- return 0
00:03:08.983 14:07:50 -- setup/hugepages.sh@99 -- surp=0
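For spot-checking a box by hand, the same counters can be pulled without the helper at all; the awk one-liners below are purely illustrative and not how the suite does it:

  awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo
  # Node files prefix each row with "Node 0", shifting the columns right:
  awk '$3 == "HugePages_Surp:" {print $4}' /sys/devices/system/node/node0/meminfo

Both print 0 on this machine, agreeing with the surp=0 above.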
00:03:08.983 14:07:50 -- setup/hugepages.sh@100 -- get_meminfo HugePages_Rsvd
00:03:08.983 14:07:50 -- setup/common.sh@17 -- local get=HugePages_Rsvd
00:03:08.983 14:07:50 -- setup/common.sh@18 -- local node=
00:03:08.983 14:07:50 -- setup/common.sh@19 -- local var val
00:03:08.983 14:07:50 -- setup/common.sh@20 -- local mem_f mem
00:03:08.983 14:07:50 -- setup/common.sh@22 -- mem_f=/proc/meminfo
00:03:08.983 14:07:50 -- setup/common.sh@23 -- [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:08.983 14:07:50 -- setup/common.sh@25 -- [[ -n '' ]]
00:03:08.983 14:07:50 -- setup/common.sh@28 -- mapfile -t mem
00:03:08.983 14:07:50 -- setup/common.sh@29 -- mem=("${mem[@]#Node +([0-9]) }")
00:03:08.983 14:07:50 -- setup/common.sh@31 -- IFS=': '
00:03:08.983 14:07:50 -- setup/common.sh@31 -- read -r var val _
00:03:08.983 14:07:50 -- setup/common.sh@16 -- printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32157880 kB' 'MemAvailable: 36529060 kB' 'Buffers: 2696 kB' 'Cached: 15717152 kB' 'SwapCached: 0 kB' 'Active: 12799568 kB' 'Inactive: 3552704 kB' 'Active(anon): 11604356 kB' 'Inactive(anon): 0 kB' 'Active(file): 1195212 kB' 'Inactive(file): 3552704 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635676 kB' 'Mapped: 219656 kB' 'Shmem: 10971932 kB' 'KReclaimable: 184292 kB' 'Slab: 459680 kB' 'SReclaimable: 184292 kB' 'SUnreclaim: 275388 kB' 'KernelStack: 9952 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12634164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 189888 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB'
[xtrace condensed: the scan continues past every field from MemTotal through HugePages_Free]
00:03:08.985 14:07:50 -- setup/common.sh@32 -- [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:08.985 14:07:50 -- setup/common.sh@33 -- echo 0
00:03:08.985 14:07:50 -- setup/common.sh@33 -- return 0
00:03:08.985 14:07:50 -- setup/hugepages.sh@100 -- resv=0
00:03:08.985 14:07:50 -- setup/hugepages.sh@102 -- echo nr_hugepages=1024
00:03:08.985 nr_hugepages=1024
00:03:08.985 14:07:50 -- setup/hugepages.sh@103 -- echo resv_hugepages=0
00:03:08.985 resv_hugepages=0
00:03:08.985 14:07:50 -- setup/hugepages.sh@104 -- echo surplus_hugepages=0
00:03:08.985 surplus_hugepages=0
00:03:08.985 14:07:50 -- setup/hugepages.sh@105 -- echo anon_hugepages=0
00:03:08.985 anon_hugepages=0
00:03:08.985 14:07:50 -- setup/hugepages.sh@107 -- (( 1024 == nr_hugepages + surp + resv ))
00:03:08.985 14:07:50 -- setup/hugepages.sh@109 -- (( 1024 == nr_hugepages ))
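The two (( ... )) tests above encode the invariant this verification is after: the 1024-page pool must be fully accounted for by plain pages, with nothing held back as reserved or surplus, i.e. HugePages_Total == nr_hugepages + surp + resv. Spelled out in isolation, as a sketch with this run's values hardcoded:

  nr_hugepages=1024 surp=0 resv=0 # the values echoed just above
  (( 1024 == nr_hugepages + surp + resv )) # pool fully backed by plain pages
  (( 1024 == nr_hugepages ))               # and exactly the size requested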
34359738367 kB' 'VmallocUsed: 189888 kB' 'VmallocChunk: 0 kB' 'Percpu: 22400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2861348 kB' 'DirectMap2M: 39004160 kB' 'DirectMap1G: 18874368 kB' 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.985 14:07:50 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.985 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.985 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- 
setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.986 14:07:50 -- 
setup/common.sh@33 -- # echo 1024 00:03:08.986 14:07:50 -- setup/common.sh@33 -- # return 0 00:03:08.986 14:07:50 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:08.986 14:07:50 -- setup/hugepages.sh@112 -- # get_nodes 00:03:08.986 14:07:50 -- setup/hugepages.sh@27 -- # local node 00:03:08.986 14:07:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.986 14:07:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:08.986 14:07:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.986 14:07:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:08.986 14:07:50 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:08.986 14:07:50 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:08.986 14:07:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:08.986 14:07:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:08.986 14:07:50 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:08.986 14:07:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.986 14:07:50 -- setup/common.sh@18 -- # local node=0 00:03:08.986 14:07:50 -- setup/common.sh@19 -- # local var val 00:03:08.986 14:07:50 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.986 14:07:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.986 14:07:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:08.986 14:07:50 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:08.986 14:07:50 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.986 14:07:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32834692 kB' 'MemFree: 17375824 kB' 'MemUsed: 15458868 kB' 'SwapCached: 0 kB' 'Active: 9172020 kB' 'Inactive: 3414024 kB' 'Active(anon): 8233976 kB' 'Inactive(anon): 0 kB' 'Active(file): 938044 kB' 'Inactive(file): 3414024 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12153084 kB' 'Mapped: 133536 kB' 'AnonPages: 436152 kB' 'Shmem: 7801016 kB' 'KernelStack: 6088 kB' 'PageTables: 4968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 121756 kB' 'Slab: 274400 kB' 'SReclaimable: 121756 kB' 'SUnreclaim: 152644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.986 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.986 14:07:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 
14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # continue 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.987 14:07:50 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.987 14:07:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.987 14:07:50 -- setup/common.sh@33 -- # echo 0 00:03:08.987 14:07:50 -- setup/common.sh@33 -- # return 0 00:03:08.987 14:07:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:08.987 14:07:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:08.987 14:07:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:08.987 14:07:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:08.987 14:07:50 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:08.987 node0=1024 expecting 1024 00:03:08.987 14:07:50 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:08.987 00:03:08.987 real 0m2.285s 00:03:08.987 user 0m1.008s 00:03:08.987 sys 0m1.337s 00:03:08.987 14:07:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:08.987 14:07:50 -- common/autotest_common.sh@10 -- # set +x 00:03:08.987 ************************************ 00:03:08.987 END TEST no_shrink_alloc 00:03:08.987 ************************************ 00:03:08.987 14:07:50 -- setup/hugepages.sh@217 -- # clear_hp 00:03:08.987 14:07:50 -- setup/hugepages.sh@37 -- # local node hp 00:03:08.987 14:07:50 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:08.987 
14:07:50 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:08.987 14:07:50 -- setup/hugepages.sh@41 -- # echo 0 00:03:08.987 14:07:50 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:08.987 14:07:50 -- setup/hugepages.sh@41 -- # echo 0 00:03:08.987 14:07:50 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:08.987 14:07:50 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:08.987 14:07:50 -- setup/hugepages.sh@41 -- # echo 0 00:03:08.987 14:07:50 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:08.987 14:07:50 -- setup/hugepages.sh@41 -- # echo 0 00:03:08.987 14:07:50 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:08.987 14:07:50 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:08.987 00:03:08.987 real 0m9.948s 00:03:08.987 user 0m3.959s 00:03:08.987 sys 0m5.096s 00:03:08.987 14:07:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:08.987 14:07:50 -- common/autotest_common.sh@10 -- # set +x 00:03:08.987 ************************************ 00:03:08.987 END TEST hugepages 00:03:08.987 ************************************ 00:03:08.987 14:07:50 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:08.987 14:07:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:08.987 14:07:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:08.987 14:07:50 -- common/autotest_common.sh@10 -- # set +x 00:03:09.246 ************************************ 00:03:09.246 START TEST driver 00:03:09.246 ************************************ 00:03:09.246 14:07:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:09.246 * Looking for test storage... 
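[Before the driver suite proceeds, the clear_hp pass traced above returns every per-node hugepage pool to zero so the next suite starts from a clean slate. A hypothetical condensation of that step, under the assumptions that the two hugepages-* directories per node are the usual 2 MiB and 1 GiB pools and that the caller runs as root, since nr_hugepages is root-writable.]

for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"   # drop the pool, freeing its pages
    done
done
export CLEAR_HUGE=yes                 # later setup.sh runs re-reserve as needed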
00:03:09.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:09.246 14:07:50 -- setup/driver.sh@68 -- # setup reset 00:03:09.246 14:07:50 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:09.246 14:07:50 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:11.220 14:07:52 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:11.220 14:07:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:11.220 14:07:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:11.220 14:07:52 -- common/autotest_common.sh@10 -- # set +x 00:03:11.480 ************************************ 00:03:11.480 START TEST guess_driver 00:03:11.480 ************************************ 00:03:11.480 14:07:52 -- common/autotest_common.sh@1111 -- # guess_driver 00:03:11.480 14:07:52 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:11.480 14:07:52 -- setup/driver.sh@47 -- # local fail=0 00:03:11.480 14:07:52 -- setup/driver.sh@49 -- # pick_driver 00:03:11.480 14:07:52 -- setup/driver.sh@36 -- # vfio 00:03:11.480 14:07:52 -- setup/driver.sh@21 -- # local iommu_groups 00:03:11.480 14:07:52 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:11.480 14:07:52 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:11.480 14:07:52 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:11.480 14:07:52 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:11.480 14:07:52 -- setup/driver.sh@29 -- # (( 102 > 0 )) 00:03:11.480 14:07:52 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:11.480 14:07:52 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:11.480 14:07:52 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:11.480 14:07:52 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:11.480 14:07:52 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:11.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:11.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:11.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:11.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:11.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:11.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:11.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:11.480 14:07:52 -- setup/driver.sh@30 -- # return 0 00:03:11.480 14:07:52 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:11.480 14:07:52 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:11.480 14:07:52 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:11.480 14:07:52 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:11.480 Looking for driver=vfio-pci 00:03:11.480 14:07:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.480 14:07:52 -- setup/driver.sh@45 -- # setup output config 00:03:11.480 14:07:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.480 14:07:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:12.414 14:07:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.414 14:07:53 -- setup/driver.sh@61 -- # [[ vfio-pci ==
vfio-pci ]] 00:03:12.414 14:07:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.414 14:07:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.414 14:07:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.414 14:07:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.414 14:07:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.415 14:07:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.415 14:07:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.415 14:07:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.415 14:07:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.415 14:07:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.415 14:07:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.415 14:07:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.415 14:07:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.415 14:07:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.415 14:07:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.415 14:07:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.415 14:07:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.415 14:07:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.415 14:07:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.415 14:07:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.415 14:07:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.415 14:07:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.415 14:07:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.415 14:07:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.415 14:07:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.415 14:07:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.415 14:07:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.415 14:07:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.415 14:07:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.415 14:07:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.415 14:07:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.415 14:07:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.415 14:07:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.415 14:07:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.415 14:07:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.415 14:07:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.415 14:07:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.415 14:07:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.415 14:07:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.415 14:07:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.415 14:07:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.415 14:07:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.415 14:07:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.415 14:07:53 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.415 14:07:53 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.415 14:07:53 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.352 14:07:54 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:03:13.352 14:07:54 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.352 14:07:54 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.352 14:07:54 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:13.352 14:07:54 -- setup/driver.sh@65 -- # setup reset 00:03:13.352 14:07:54 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:13.352 14:07:54 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:15.888 00:03:15.888 real 0m4.099s 00:03:15.888 user 0m0.898s 00:03:15.888 sys 0m1.438s 00:03:15.888 14:07:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:15.888 14:07:56 -- common/autotest_common.sh@10 -- # set +x 00:03:15.888 ************************************ 00:03:15.888 END TEST guess_driver 00:03:15.888 ************************************ 00:03:15.888 00:03:15.888 real 0m6.384s 00:03:15.888 user 0m1.423s 00:03:15.888 sys 0m2.373s 00:03:15.888 14:07:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:15.888 14:07:57 -- common/autotest_common.sh@10 -- # set +x 00:03:15.888 ************************************ 00:03:15.888 END TEST driver 00:03:15.888 ************************************ 00:03:15.888 14:07:57 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:15.888 14:07:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:15.888 14:07:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:15.888 14:07:57 -- common/autotest_common.sh@10 -- # set +x 00:03:15.888 ************************************ 00:03:15.888 START TEST devices 00:03:15.888 ************************************ 00:03:15.888 14:07:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:15.888 * Looking for test storage... 
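[The guess_driver test that just passed settles on vfio-pci whenever IOMMU groups are populated (102 on this node, per the trace) and the module's dependency chain resolves. The sketch below is a hedged condensation of that decision, not the verbatim pick_driver/vfio functions from setup/driver.sh; modprobe --show-depends merely lists the insmod chain without loading anything, so it doubles as an availability probe.]

pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
        if modprobe --show-depends vfio_pci &>/dev/null; then
            echo vfio-pci   # IOMMU usable and vfio_pci resolvable
            return 0
        fi
    fi
    echo 'No valid driver found'
    return 1
}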
00:03:15.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:15.888 14:07:57 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:15.888 14:07:57 -- setup/devices.sh@192 -- # setup reset 00:03:15.888 14:07:57 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:15.888 14:07:57 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:16.828 14:07:58 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:16.828 14:07:58 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:16.828 14:07:58 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:16.828 14:07:58 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:16.828 14:07:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:16.828 14:07:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:16.828 14:07:58 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:16.828 14:07:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:16.828 14:07:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:16.828 14:07:58 -- setup/devices.sh@196 -- # blocks=() 00:03:16.828 14:07:58 -- setup/devices.sh@196 -- # declare -a blocks 00:03:16.828 14:07:58 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:16.828 14:07:58 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:16.828 14:07:58 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:16.828 14:07:58 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:16.828 14:07:58 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:16.828 14:07:58 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:16.828 14:07:58 -- setup/devices.sh@202 -- # pci=0000:84:00.0 00:03:16.828 14:07:58 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\4\:\0\0\.\0* ]] 00:03:16.828 14:07:58 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:16.828 14:07:58 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:16.829 14:07:58 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:17.090 No valid GPT data, bailing 00:03:17.090 14:07:58 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:17.090 14:07:58 -- scripts/common.sh@391 -- # pt= 00:03:17.090 14:07:58 -- scripts/common.sh@392 -- # return 1 00:03:17.090 14:07:58 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:17.090 14:07:58 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:17.090 14:07:58 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:17.090 14:07:58 -- setup/common.sh@80 -- # echo 1000204886016 00:03:17.090 14:07:58 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:17.090 14:07:58 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:17.090 14:07:58 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:84:00.0 00:03:17.090 14:07:58 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:17.090 14:07:58 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:17.090 14:07:58 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:17.090 14:07:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:17.090 14:07:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:17.090 14:07:58 -- common/autotest_common.sh@10 -- # set +x 00:03:17.090 ************************************ 00:03:17.090 START TEST nvme_mount 00:03:17.090 ************************************ 00:03:17.090 14:07:58 -- 
common/autotest_common.sh@1111 -- # nvme_mount 00:03:17.090 14:07:58 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:17.090 14:07:58 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:17.090 14:07:58 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:17.090 14:07:58 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:17.090 14:07:58 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:17.090 14:07:58 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:17.090 14:07:58 -- setup/common.sh@40 -- # local part_no=1 00:03:17.090 14:07:58 -- setup/common.sh@41 -- # local size=1073741824 00:03:17.090 14:07:58 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:17.090 14:07:58 -- setup/common.sh@44 -- # parts=() 00:03:17.090 14:07:58 -- setup/common.sh@44 -- # local parts 00:03:17.090 14:07:58 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:17.090 14:07:58 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:17.090 14:07:58 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:17.090 14:07:58 -- setup/common.sh@46 -- # (( part++ )) 00:03:17.090 14:07:58 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:17.090 14:07:58 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:17.090 14:07:58 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:17.090 14:07:58 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:18.035 Creating new GPT entries in memory. 00:03:18.035 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:18.035 other utilities. 00:03:18.035 14:07:59 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:18.035 14:07:59 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:18.035 14:07:59 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:18.035 14:07:59 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:18.035 14:07:59 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:19.413 Creating new GPT entries in memory. 00:03:19.413 The operation has completed successfully. 
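[sgdisk has just confirmed the new partition, so the mkfs/mount phase that follows can rely on /dev/nvme0n1p1 existing. Below is a hypothetical replay of the sequence the trace walked through; udevadm settle stands in for SPDK's own sync_dev_uevents.sh helper, and the mount point is a placeholder rather than the test's real directory.]

disk=/dev/nvme0n1
sgdisk "$disk" --zap-all                # wipe any old GPT/MBR structures
sgdisk "$disk" --new=1:2048:2099199     # 2097152 sectors = 1 GiB at 512 B each
udevadm settle                          # wait for the nvme0n1p1 uevent to land
mkfs.ext4 -qF "${disk}p1"
mount "${disk}p1" /path/to/nvme_mount   # placeholder for the test's mount dir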
00:03:19.413 14:08:00 -- setup/common.sh@57 -- # (( part++ )) 00:03:19.413 14:08:00 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:19.413 14:08:00 -- setup/common.sh@62 -- # wait 3050487 00:03:19.413 14:08:00 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.413 14:08:00 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:19.413 14:08:00 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.413 14:08:00 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:19.413 14:08:00 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:19.413 14:08:00 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.413 14:08:00 -- setup/devices.sh@105 -- # verify 0000:84:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:19.413 14:08:00 -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:03:19.413 14:08:00 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:19.413 14:08:00 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.413 14:08:00 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:19.413 14:08:00 -- setup/devices.sh@53 -- # local found=0 00:03:19.413 14:08:00 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:19.413 14:08:00 -- setup/devices.sh@56 -- # : 00:03:19.413 14:08:00 -- setup/devices.sh@59 -- # local pci status 00:03:19.413 14:08:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.413 14:08:00 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:03:19.413 14:08:00 -- setup/devices.sh@47 -- # setup output config 00:03:19.413 14:08:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.413 14:08:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:19.981 14:08:01 -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:19.981 14:08:01 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:19.981 14:08:01 -- setup/devices.sh@63 -- # found=1 00:03:19.981 14:08:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.981 14:08:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:19.981 14:08:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.981 14:08:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:19.981 14:08:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.981 14:08:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:19.981 14:08:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.981 14:08:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:19.981 14:08:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.981 14:08:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:19.981 
14:08:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.981 14:08:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:19.981 14:08:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.981 14:08:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:19.981 14:08:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.981 14:08:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:19.981 14:08:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.982 14:08:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:19.982 14:08:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.982 14:08:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:19.982 14:08:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.982 14:08:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:19.982 14:08:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.982 14:08:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:19.982 14:08:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.982 14:08:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:19.982 14:08:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.982 14:08:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:19.982 14:08:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.982 14:08:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:19.982 14:08:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.982 14:08:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:19.982 14:08:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.241 14:08:01 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:20.241 14:08:01 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:20.241 14:08:01 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.241 14:08:01 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:20.241 14:08:01 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:20.241 14:08:01 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:20.241 14:08:01 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.241 14:08:01 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.241 14:08:01 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:20.241 14:08:01 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:20.241 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:20.241 14:08:01 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:20.241 14:08:01 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:20.500 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:20.500 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:20.500 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:20.500 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:20.500 14:08:01 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:20.500 14:08:01 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:20.500 14:08:01 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.500 14:08:01 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:20.500 14:08:01 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:20.500 14:08:01 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.500 14:08:01 -- setup/devices.sh@116 -- # verify 0000:84:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:20.500 14:08:01 -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:03:20.500 14:08:01 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:20.500 14:08:01 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.500 14:08:01 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:20.500 14:08:01 -- setup/devices.sh@53 -- # local found=0 00:03:20.500 14:08:01 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:20.500 14:08:01 -- setup/devices.sh@56 -- # : 00:03:20.500 14:08:01 -- setup/devices.sh@59 -- # local pci status 00:03:20.500 14:08:01 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:03:20.500 14:08:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.500 14:08:01 -- setup/devices.sh@47 -- # setup output config 00:03:20.500 14:08:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.500 14:08:01 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:21.437 14:08:02 -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:21.437 14:08:02 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:21.437 14:08:02 -- setup/devices.sh@63 -- # found=1 00:03:21.437 14:08:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.437 14:08:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:21.437 14:08:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.437 14:08:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:21.437 14:08:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.437 14:08:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:21.437 14:08:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.437 14:08:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:21.437 14:08:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.437 14:08:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:21.437 14:08:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.437 14:08:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == 
\0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:21.437 14:08:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.437 14:08:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:21.437 14:08:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.437 14:08:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:21.437 14:08:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.437 14:08:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:21.437 14:08:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.437 14:08:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:21.437 14:08:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.437 14:08:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:21.437 14:08:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.437 14:08:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:21.437 14:08:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.437 14:08:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:21.437 14:08:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.437 14:08:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:21.437 14:08:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.437 14:08:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:21.437 14:08:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.437 14:08:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:21.438 14:08:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.438 14:08:02 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:21.438 14:08:02 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:21.438 14:08:02 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:21.438 14:08:02 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:21.438 14:08:02 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:21.438 14:08:02 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:21.438 14:08:02 -- setup/devices.sh@125 -- # verify 0000:84:00.0 data@nvme0n1 '' '' 00:03:21.438 14:08:02 -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:03:21.438 14:08:02 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:21.438 14:08:02 -- setup/devices.sh@50 -- # local mount_point= 00:03:21.438 14:08:02 -- setup/devices.sh@51 -- # local test_file= 00:03:21.438 14:08:02 -- setup/devices.sh@53 -- # local found=0 00:03:21.438 14:08:02 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:21.438 14:08:02 -- setup/devices.sh@59 -- # local pci status 00:03:21.438 14:08:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.438 14:08:02 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:03:21.438 14:08:02 -- setup/devices.sh@47 -- # setup output config 00:03:21.438 14:08:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.438 14:08:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:22.373 14:08:03 -- 
setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:22.373 14:08:03 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:22.373 14:08:03 -- setup/devices.sh@63 -- # found=1 00:03:22.373 14:08:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.373 14:08:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:22.373 14:08:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.373 14:08:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:22.373 14:08:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.373 14:08:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:22.373 14:08:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.373 14:08:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:22.373 14:08:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.373 14:08:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:22.373 14:08:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.373 14:08:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:22.373 14:08:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.373 14:08:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:22.373 14:08:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.374 14:08:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:22.374 14:08:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.374 14:08:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:22.374 14:08:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.374 14:08:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:22.374 14:08:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.374 14:08:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:22.374 14:08:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.374 14:08:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:22.374 14:08:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.374 14:08:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:22.374 14:08:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.374 14:08:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:22.374 14:08:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.374 14:08:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:22.374 14:08:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.374 14:08:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:22.374 14:08:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.633 14:08:03 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:22.634 14:08:03 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:22.634 14:08:03 -- setup/devices.sh@68 -- # return 0 00:03:22.634 14:08:03 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:22.634 14:08:03 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:22.634 14:08:04 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:03:22.634 14:08:04 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:22.634 14:08:04 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:22.634 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:22.634 00:03:22.634 real 0m5.435s 00:03:22.634 user 0m1.264s 00:03:22.634 sys 0m1.911s 00:03:22.634 14:08:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:22.634 14:08:04 -- common/autotest_common.sh@10 -- # set +x 00:03:22.634 ************************************ 00:03:22.634 END TEST nvme_mount 00:03:22.634 ************************************ 00:03:22.634 14:08:04 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:22.634 14:08:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:22.634 14:08:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:22.634 14:08:04 -- common/autotest_common.sh@10 -- # set +x 00:03:22.634 ************************************ 00:03:22.634 START TEST dm_mount 00:03:22.634 ************************************ 00:03:22.634 14:08:04 -- common/autotest_common.sh@1111 -- # dm_mount 00:03:22.634 14:08:04 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:22.634 14:08:04 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:22.634 14:08:04 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:22.634 14:08:04 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:22.634 14:08:04 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:22.634 14:08:04 -- setup/common.sh@40 -- # local part_no=2 00:03:22.634 14:08:04 -- setup/common.sh@41 -- # local size=1073741824 00:03:22.634 14:08:04 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:22.634 14:08:04 -- setup/common.sh@44 -- # parts=() 00:03:22.634 14:08:04 -- setup/common.sh@44 -- # local parts 00:03:22.634 14:08:04 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:22.634 14:08:04 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:22.634 14:08:04 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:22.634 14:08:04 -- setup/common.sh@46 -- # (( part++ )) 00:03:22.634 14:08:04 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:22.634 14:08:04 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:22.634 14:08:04 -- setup/common.sh@46 -- # (( part++ )) 00:03:22.634 14:08:04 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:22.634 14:08:04 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:22.634 14:08:04 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:22.634 14:08:04 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:24.014 Creating new GPT entries in memory. 00:03:24.014 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:24.014 other utilities. 00:03:24.014 14:08:05 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:24.014 14:08:05 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:24.014 14:08:05 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:24.014 14:08:05 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:24.014 14:08:05 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:24.953 Creating new GPT entries in memory. 00:03:24.953 The operation has completed successfully. 
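(The matching sgdisk --new call for the second partition follows just below.) For readers tracing the partition_drive helper above: it zaps the disk, then computes each partition's sector range from a fixed 1 GiB size and issues one locked sgdisk call per partition. A minimal standalone sketch of that flow, with the device and size assumed from this run rather than copied out of the helper:

    # Sketch: recreate the two 1 GiB GPT partitions the way the trace above computes them.
    disk=/dev/nvme0n1                      # assumed from this run
    size=$(( 1073741824 / 512 ))           # 1 GiB expressed in 512-byte sectors
    sgdisk "$disk" --zap-all               # wipe existing GPT/MBR structures
    part_start=0 part_end=0
    for part in 1 2; do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        flock "$disk" sgdisk "$disk" --new=$part:$part_start:$part_end
    done

Run against this disk, the two calls come out as --new=1:2048:2099199 and --new=2:2099200:4196351, matching the sgdisk invocations in the trace.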
00:03:24.953 14:08:06 -- setup/common.sh@57 -- # (( part++ )) 00:03:24.953 14:08:06 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:24.953 14:08:06 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:24.953 14:08:06 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:24.953 14:08:06 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:25.892 The operation has completed successfully. 00:03:25.892 14:08:07 -- setup/common.sh@57 -- # (( part++ )) 00:03:25.892 14:08:07 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:25.892 14:08:07 -- setup/common.sh@62 -- # wait 3052269 00:03:25.892 14:08:07 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:25.892 14:08:07 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:25.892 14:08:07 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:25.892 14:08:07 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:25.892 14:08:07 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:25.892 14:08:07 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:25.892 14:08:07 -- setup/devices.sh@161 -- # break 00:03:25.892 14:08:07 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:25.892 14:08:07 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:25.892 14:08:07 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:25.892 14:08:07 -- setup/devices.sh@166 -- # dm=dm-0 00:03:25.892 14:08:07 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:25.892 14:08:07 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:25.892 14:08:07 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:25.892 14:08:07 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:25.892 14:08:07 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:25.892 14:08:07 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:25.892 14:08:07 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:25.892 14:08:07 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:25.892 14:08:07 -- setup/devices.sh@174 -- # verify 0000:84:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:25.892 14:08:07 -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:03:25.892 14:08:07 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:25.892 14:08:07 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:25.892 14:08:07 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:25.892 14:08:07 -- setup/devices.sh@53 -- # local found=0 00:03:25.892 14:08:07 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:25.892 14:08:07 -- setup/devices.sh@56 -- # : 00:03:25.892 14:08:07 -- 
setup/devices.sh@59 -- # local pci status 00:03:25.892 14:08:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.892 14:08:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:03:25.892 14:08:07 -- setup/devices.sh@47 -- # setup output config 00:03:25.892 14:08:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.892 14:08:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:26.831 14:08:08 -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:26.831 14:08:08 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:26.831 14:08:08 -- setup/devices.sh@63 -- # found=1 00:03:26.831 14:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.831 14:08:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:26.831 14:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.831 14:08:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:26.831 14:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.831 14:08:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:26.831 14:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.831 14:08:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:26.831 14:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.831 14:08:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:26.831 14:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.831 14:08:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:26.831 14:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.831 14:08:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:26.831 14:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.831 14:08:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:26.831 14:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.831 14:08:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:26.831 14:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.831 14:08:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:26.831 14:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.831 14:08:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:26.831 14:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.831 14:08:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:26.831 14:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.831 14:08:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:26.831 14:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.831 14:08:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:26.831 14:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.831 14:08:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:26.831 14:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.831 14:08:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == 
\0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:26.831 14:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.831 14:08:08 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:26.831 14:08:08 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:26.831 14:08:08 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:26.831 14:08:08 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:26.831 14:08:08 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:26.831 14:08:08 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:26.831 14:08:08 -- setup/devices.sh@184 -- # verify 0000:84:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:26.831 14:08:08 -- setup/devices.sh@48 -- # local dev=0000:84:00.0 00:03:26.831 14:08:08 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:26.831 14:08:08 -- setup/devices.sh@50 -- # local mount_point= 00:03:26.831 14:08:08 -- setup/devices.sh@51 -- # local test_file= 00:03:26.831 14:08:08 -- setup/devices.sh@53 -- # local found=0 00:03:26.831 14:08:08 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:26.831 14:08:08 -- setup/devices.sh@59 -- # local pci status 00:03:26.831 14:08:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.831 14:08:08 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0 00:03:26.831 14:08:08 -- setup/devices.sh@47 -- # setup output config 00:03:26.831 14:08:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.831 14:08:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:27.768 14:08:09 -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:27.768 14:08:09 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:27.768 14:08:09 -- setup/devices.sh@63 -- # found=1 00:03:27.768 14:08:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.768 14:08:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:27.769 14:08:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.769 14:08:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:27.769 14:08:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.769 14:08:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:27.769 14:08:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.769 14:08:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:27.769 14:08:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.769 14:08:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:27.769 14:08:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.769 14:08:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:27.769 14:08:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.769 14:08:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:27.769 14:08:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 
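(The scan continues below through the remaining 0000:00:04.x and 0000:80:04.x channels before found is checked.) What this verify pass is doing: it re-runs scripts/setup.sh config with PCI_ALLOWED pinned to the NVMe device and reads every status line that prints, and found flips to 1 only when the allow-listed BDF reports the expected active holders. A condensed sketch of that decision, with the field layout assumed from the read -r pci _ _ status pattern in the trace:

    # Sketch: the decision the verify pass above makes, condensed into one loop.
    dev=0000:84:00.0
    mounts='holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0'   # taken from this verify pass
    found=0
    while read -r pci _ _ status; do
        [[ $pci == "$dev" ]] || continue   # the idle IOAT channels fall through here
        [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
    done < <(PCI_ALLOWED=$dev ./scripts/setup.sh config)
    (( found == 1 )) && echo "$dev kept its active dm holders and was not rebound"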
00:03:27.769 14:08:09 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:27.769 14:08:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.769 14:08:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:27.769 14:08:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.769 14:08:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:27.769 14:08:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.769 14:08:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:27.769 14:08:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.769 14:08:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:27.769 14:08:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.769 14:08:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:27.769 14:08:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.769 14:08:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:27.769 14:08:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.769 14:08:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:27.769 14:08:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.769 14:08:09 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:27.769 14:08:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.769 14:08:09 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:27.769 14:08:09 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:27.769 14:08:09 -- setup/devices.sh@68 -- # return 0 00:03:27.769 14:08:09 -- setup/devices.sh@187 -- # cleanup_dm 00:03:27.769 14:08:09 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:28.028 14:08:09 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:28.028 14:08:09 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:28.028 14:08:09 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:28.028 14:08:09 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:28.029 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:28.029 14:08:09 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:28.029 14:08:09 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:28.029 00:03:28.029 real 0m5.241s 00:03:28.029 user 0m0.860s 00:03:28.029 sys 0m1.362s 00:03:28.029 14:08:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:28.029 14:08:09 -- common/autotest_common.sh@10 -- # set +x 00:03:28.029 ************************************ 00:03:28.029 END TEST dm_mount 00:03:28.029 ************************************ 00:03:28.029 14:08:09 -- setup/devices.sh@1 -- # cleanup 00:03:28.029 14:08:09 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:28.029 14:08:09 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.029 14:08:09 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:28.029 14:08:09 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:28.029 14:08:09 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:28.029 14:08:09 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:28.288 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:28.288 /dev/nvme0n1: 8 bytes were erased at offset 
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:28.288 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:28.288 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:28.288 14:08:09 -- setup/devices.sh@12 -- # cleanup_dm 00:03:28.288 14:08:09 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:28.288 14:08:09 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:28.288 14:08:09 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:28.288 14:08:09 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:28.288 14:08:09 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:28.288 14:08:09 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:28.288 00:03:28.288 real 0m12.538s 00:03:28.288 user 0m2.777s 00:03:28.288 sys 0m4.260s 00:03:28.288 14:08:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:28.288 14:08:09 -- common/autotest_common.sh@10 -- # set +x 00:03:28.288 ************************************ 00:03:28.288 END TEST devices 00:03:28.288 ************************************ 00:03:28.288 00:03:28.288 real 0m38.519s 00:03:28.288 user 0m11.202s 00:03:28.288 sys 0m16.654s 00:03:28.288 14:08:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:28.288 14:08:09 -- common/autotest_common.sh@10 -- # set +x 00:03:28.288 ************************************ 00:03:28.288 END TEST setup.sh 00:03:28.288 ************************************ 00:03:28.288 14:08:09 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:29.224 Hugepages 00:03:29.224 node hugesize free / total 00:03:29.224 node0 1048576kB 0 / 0 00:03:29.224 node0 2048kB 2048 / 2048 00:03:29.224 node1 1048576kB 0 / 0 00:03:29.224 node1 2048kB 0 / 0 00:03:29.224 00:03:29.224 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:29.224 I/OAT 0000:00:04.0 8086 3c20 0 ioatdma - - 00:03:29.224 I/OAT 0000:00:04.1 8086 3c21 0 ioatdma - - 00:03:29.224 I/OAT 0000:00:04.2 8086 3c22 0 ioatdma - - 00:03:29.224 I/OAT 0000:00:04.3 8086 3c23 0 ioatdma - - 00:03:29.224 I/OAT 0000:00:04.4 8086 3c24 0 ioatdma - - 00:03:29.224 I/OAT 0000:00:04.5 8086 3c25 0 ioatdma - - 00:03:29.224 I/OAT 0000:00:04.6 8086 3c26 0 ioatdma - - 00:03:29.224 I/OAT 0000:00:04.7 8086 3c27 0 ioatdma - - 00:03:29.224 I/OAT 0000:80:04.0 8086 3c20 1 ioatdma - - 00:03:29.224 I/OAT 0000:80:04.1 8086 3c21 1 ioatdma - - 00:03:29.224 I/OAT 0000:80:04.2 8086 3c22 1 ioatdma - - 00:03:29.224 I/OAT 0000:80:04.3 8086 3c23 1 ioatdma - - 00:03:29.224 I/OAT 0000:80:04.4 8086 3c24 1 ioatdma - - 00:03:29.224 I/OAT 0000:80:04.5 8086 3c25 1 ioatdma - - 00:03:29.224 I/OAT 0000:80:04.6 8086 3c26 1 ioatdma - - 00:03:29.224 I/OAT 0000:80:04.7 8086 3c27 1 ioatdma - - 00:03:29.483 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:29.483 14:08:10 -- spdk/autotest.sh@130 -- # uname -s 00:03:29.483 14:08:10 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:29.483 14:08:10 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:29.483 14:08:10 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:30.420 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:03:30.420 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:03:30.420 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:03:30.420 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:03:30.420 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:03:30.420 0000:00:04.2 (8086 3c22): 
ioatdma -> vfio-pci 00:03:30.420 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:03:30.420 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:03:30.420 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:03:30.420 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:03:30.420 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:03:30.420 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:03:30.420 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:03:30.420 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:03:30.420 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:03:30.420 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:03:31.358 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:03:31.617 14:08:12 -- common/autotest_common.sh@1518 -- # sleep 1 00:03:32.557 14:08:13 -- common/autotest_common.sh@1519 -- # bdfs=() 00:03:32.557 14:08:13 -- common/autotest_common.sh@1519 -- # local bdfs 00:03:32.557 14:08:13 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:32.557 14:08:13 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:32.557 14:08:13 -- common/autotest_common.sh@1499 -- # bdfs=() 00:03:32.557 14:08:13 -- common/autotest_common.sh@1499 -- # local bdfs 00:03:32.557 14:08:13 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:32.557 14:08:13 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:32.557 14:08:13 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:03:32.557 14:08:13 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:03:32.557 14:08:13 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:84:00.0 00:03:32.557 14:08:13 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:33.494 Waiting for block devices as requested 00:03:33.494 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:03:33.494 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:03:33.494 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:03:33.754 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:03:33.754 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:03:33.754 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:03:33.754 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:03:34.014 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:03:34.014 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:03:34.014 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:03:34.014 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:03:34.274 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:03:34.274 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:03:34.274 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:03:34.533 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:03:34.533 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:03:34.533 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:03:34.533 14:08:16 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:34.533 14:08:16 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:84:00.0 00:03:34.533 14:08:16 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:03:34.533 14:08:16 -- common/autotest_common.sh@1488 -- # grep 0000:84:00.0/nvme/nvme 00:03:34.533 14:08:16 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:03:34.533 14:08:16 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 ]] 00:03:34.533 14:08:16 -- 
common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:03:34.533 14:08:16 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:03:34.533 14:08:16 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:34.533 14:08:16 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:34.533 14:08:16 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:34.533 14:08:16 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:34.533 14:08:16 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:34.533 14:08:16 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:34.533 14:08:16 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:34.533 14:08:16 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:34.533 14:08:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:34.533 14:08:16 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:34.533 14:08:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:34.533 14:08:16 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:34.533 14:08:16 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:34.533 14:08:16 -- common/autotest_common.sh@1543 -- # continue 00:03:34.533 14:08:16 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:34.533 14:08:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:34.533 14:08:16 -- common/autotest_common.sh@10 -- # set +x 00:03:34.533 14:08:16 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:34.533 14:08:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:34.533 14:08:16 -- common/autotest_common.sh@10 -- # set +x 00:03:34.792 14:08:16 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:35.731 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:03:35.731 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:03:35.731 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:03:35.731 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:03:35.731 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:03:35.731 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:03:35.731 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:03:35.731 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:03:35.731 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:03:35.731 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:03:35.731 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:03:35.731 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:03:35.731 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:03:35.731 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:03:35.731 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:03:35.731 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:03:36.669 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:03:36.669 14:08:18 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:36.669 14:08:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:36.669 14:08:18 -- common/autotest_common.sh@10 -- # set +x 00:03:36.669 14:08:18 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:36.669 14:08:18 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:03:36.669 14:08:18 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:03:36.669 14:08:18 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:36.669 14:08:18 -- common/autotest_common.sh@1563 -- # local bdfs 00:03:36.669 14:08:18 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:03:36.669 14:08:18 -- common/autotest_common.sh@1499 -- # bdfs=() 00:03:36.669 
14:08:18 -- common/autotest_common.sh@1499 -- # local bdfs 00:03:36.669 14:08:18 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:36.669 14:08:18 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:36.669 14:08:18 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:03:36.669 14:08:18 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:03:36.669 14:08:18 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:84:00.0 00:03:36.669 14:08:18 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:03:36.669 14:08:18 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:84:00.0/device 00:03:36.669 14:08:18 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:36.669 14:08:18 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:36.669 14:08:18 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:36.669 14:08:18 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:84:00.0 00:03:36.669 14:08:18 -- common/autotest_common.sh@1578 -- # [[ -z 0000:84:00.0 ]] 00:03:36.669 14:08:18 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=3056276 00:03:36.669 14:08:18 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:36.669 14:08:18 -- common/autotest_common.sh@1584 -- # waitforlisten 3056276 00:03:36.669 14:08:18 -- common/autotest_common.sh@817 -- # '[' -z 3056276 ']' 00:03:36.669 14:08:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:36.669 14:08:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:36.669 14:08:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:36.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:36.669 14:08:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:36.669 14:08:18 -- common/autotest_common.sh@10 -- # set +x 00:03:36.927 [2024-04-26 14:08:18.290484] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
00:03:36.927 [2024-04-26 14:08:18.290586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3056276 ] 00:03:36.927 EAL: No free 2048 kB hugepages reported on node 1 00:03:36.927 [2024-04-26 14:08:18.349621] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:36.927 [2024-04-26 14:08:18.467436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:37.186 14:08:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:37.186 14:08:18 -- common/autotest_common.sh@850 -- # return 0 00:03:37.186 14:08:18 -- common/autotest_common.sh@1586 -- # bdf_id=0 00:03:37.186 14:08:18 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:03:37.186 14:08:18 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:84:00.0 00:03:40.476 nvme0n1 00:03:40.476 14:08:21 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:40.735 [2024-04-26 14:08:22.070692] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:40.735 [2024-04-26 14:08:22.070738] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:40.735 request: 00:03:40.735 { 00:03:40.735 "nvme_ctrlr_name": "nvme0", 00:03:40.735 "password": "test", 00:03:40.735 "method": "bdev_nvme_opal_revert", 00:03:40.735 "req_id": 1 00:03:40.735 } 00:03:40.735 Got JSON-RPC error response 00:03:40.735 response: 00:03:40.735 { 00:03:40.735 "code": -32603, 00:03:40.735 "message": "Internal error" 00:03:40.735 } 00:03:40.735 14:08:22 -- common/autotest_common.sh@1590 -- # true 00:03:40.735 14:08:22 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:03:40.735 14:08:22 -- common/autotest_common.sh@1594 -- # killprocess 3056276 00:03:40.735 14:08:22 -- common/autotest_common.sh@936 -- # '[' -z 3056276 ']' 00:03:40.735 14:08:22 -- common/autotest_common.sh@940 -- # kill -0 3056276 00:03:40.735 14:08:22 -- common/autotest_common.sh@941 -- # uname 00:03:40.735 14:08:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:40.735 14:08:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3056276 00:03:40.735 14:08:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:40.735 14:08:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:40.735 14:08:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3056276' 00:03:40.735 killing process with pid 3056276 00:03:40.735 14:08:22 -- common/autotest_common.sh@955 -- # kill 3056276 00:03:40.735 14:08:22 -- common/autotest_common.sh@960 -- # wait 3056276 00:03:42.634 14:08:23 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:42.634 14:08:23 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:42.634 14:08:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:42.634 14:08:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:42.634 14:08:23 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:42.634 14:08:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:42.634 14:08:23 -- common/autotest_common.sh@10 -- # set +x 00:03:42.634 14:08:23 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:42.634 14:08:23 
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:42.634 14:08:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:42.634 14:08:23 -- common/autotest_common.sh@10 -- # set +x 00:03:42.634 ************************************ 00:03:42.634 START TEST env 00:03:42.634 ************************************ 00:03:42.634 14:08:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:42.634 * Looking for test storage... 00:03:42.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:42.634 14:08:23 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:42.634 14:08:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:42.634 14:08:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:42.634 14:08:23 -- common/autotest_common.sh@10 -- # set +x 00:03:42.634 ************************************ 00:03:42.634 START TEST env_memory 00:03:42.634 ************************************ 00:03:42.634 14:08:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:42.634 00:03:42.634 00:03:42.634 CUnit - A unit testing framework for C - Version 2.1-3 00:03:42.634 http://cunit.sourceforge.net/ 00:03:42.634 00:03:42.634 00:03:42.634 Suite: memory 00:03:42.634 Test: alloc and free memory map ...[2024-04-26 14:08:24.095713] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:42.634 passed 00:03:42.634 Test: mem map translation ...[2024-04-26 14:08:24.130881] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:42.634 [2024-04-26 14:08:24.130908] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:42.634 [2024-04-26 14:08:24.130968] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:42.634 [2024-04-26 14:08:24.130983] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:42.634 passed 00:03:42.634 Test: mem map registration ...[2024-04-26 14:08:24.195096] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:42.634 [2024-04-26 14:08:24.195118] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:42.894 passed 00:03:42.894 Test: mem map adjacent registrations ...passed 00:03:42.894 00:03:42.894 Run Summary: Type Total Ran Passed Failed Inactive 00:03:42.894 suites 1 1 n/a 0 0 00:03:42.894 tests 4 4 4 0 0 00:03:42.894 asserts 152 152 152 0 n/a 00:03:42.894 00:03:42.894 Elapsed time = 0.222 seconds 00:03:42.894 00:03:42.894 real 0m0.231s 00:03:42.894 user 0m0.223s 00:03:42.894 sys 0m0.007s 00:03:42.894 14:08:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:42.894 14:08:24 -- common/autotest_common.sh@10 -- # set +x 
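(The END TEST env_memory banner follows directly below.) Every START/END TEST banner in this log is printed by the run_test helper, which in the SPDK tree lives in test/common/autotest_common.sh; it wraps the named command in the asterisk banners and shell timing, which is where the real/user/sys lines after each test come from. A simplified sketch of the visible behaviour, not the real helper:

    # Simplified sketch of the wrapper behind the banners in this log; the real
    # run_test adds argument checks and xtrace control around the command.
    run_test() {
        local test_name=$1; shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@" || return 1          # timing like this produces the real/user/sys lines
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
    }
    run_test env_memory ./test/env/memory/memory_ut   # invocation form taken from the trace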
00:03:42.894 ************************************ 00:03:42.894 END TEST env_memory 00:03:42.894 ************************************ 00:03:42.894 14:08:24 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:42.894 14:08:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:42.894 14:08:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:42.894 14:08:24 -- common/autotest_common.sh@10 -- # set +x 00:03:42.894 ************************************ 00:03:42.894 START TEST env_vtophys 00:03:42.894 ************************************ 00:03:42.894 14:08:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:42.894 EAL: lib.eal log level changed from notice to debug 00:03:42.894 EAL: Detected lcore 0 as core 0 on socket 0 00:03:42.894 EAL: Detected lcore 1 as core 1 on socket 0 00:03:42.894 EAL: Detected lcore 2 as core 2 on socket 0 00:03:42.894 EAL: Detected lcore 3 as core 3 on socket 0 00:03:42.894 EAL: Detected lcore 4 as core 4 on socket 0 00:03:42.894 EAL: Detected lcore 5 as core 5 on socket 0 00:03:42.894 EAL: Detected lcore 6 as core 6 on socket 0 00:03:42.894 EAL: Detected lcore 7 as core 7 on socket 0 00:03:42.894 EAL: Detected lcore 8 as core 0 on socket 1 00:03:42.894 EAL: Detected lcore 9 as core 1 on socket 1 00:03:42.894 EAL: Detected lcore 10 as core 2 on socket 1 00:03:42.894 EAL: Detected lcore 11 as core 3 on socket 1 00:03:42.894 EAL: Detected lcore 12 as core 4 on socket 1 00:03:42.894 EAL: Detected lcore 13 as core 5 on socket 1 00:03:42.894 EAL: Detected lcore 14 as core 6 on socket 1 00:03:42.894 EAL: Detected lcore 15 as core 7 on socket 1 00:03:42.894 EAL: Detected lcore 16 as core 0 on socket 0 00:03:42.894 EAL: Detected lcore 17 as core 1 on socket 0 00:03:42.894 EAL: Detected lcore 18 as core 2 on socket 0 00:03:42.894 EAL: Detected lcore 19 as core 3 on socket 0 00:03:42.894 EAL: Detected lcore 20 as core 4 on socket 0 00:03:42.894 EAL: Detected lcore 21 as core 5 on socket 0 00:03:42.894 EAL: Detected lcore 22 as core 6 on socket 0 00:03:42.894 EAL: Detected lcore 23 as core 7 on socket 0 00:03:42.894 EAL: Detected lcore 24 as core 0 on socket 1 00:03:42.894 EAL: Detected lcore 25 as core 1 on socket 1 00:03:42.894 EAL: Detected lcore 26 as core 2 on socket 1 00:03:42.894 EAL: Detected lcore 27 as core 3 on socket 1 00:03:42.894 EAL: Detected lcore 28 as core 4 on socket 1 00:03:42.894 EAL: Detected lcore 29 as core 5 on socket 1 00:03:42.894 EAL: Detected lcore 30 as core 6 on socket 1 00:03:42.894 EAL: Detected lcore 31 as core 7 on socket 1 00:03:42.894 EAL: Maximum logical cores by configuration: 128 00:03:42.894 EAL: Detected CPU lcores: 32 00:03:42.894 EAL: Detected NUMA nodes: 2 00:03:42.894 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:42.894 EAL: Detected shared linkage of DPDK 00:03:42.894 EAL: No shared files mode enabled, IPC will be disabled 00:03:42.894 EAL: Bus pci wants IOVA as 'DC' 00:03:42.894 EAL: Buses did not request a specific IOVA mode. 00:03:42.894 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:42.894 EAL: Selected IOVA mode 'VA' 00:03:42.894 EAL: No free 2048 kB hugepages reported on node 1 00:03:42.894 EAL: Probing VFIO support... 
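(The probe result, IOMMU type 1 supported and VFIO initialized, follows below.) The lcore and NUMA layout EAL reports here is read from sysfs, so the same 32-lcore, 2-socket topology and the 2048 kB hugepage pool can be cross-checked outside of DPDK. A quick sketch using standard sysfs paths:

    # Sketch: cross-check the topology EAL just detected, straight from sysfs.
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        printf 'lcore %s: core %s socket %s\n' "${cpu##*cpu}" \
            "$(cat "$cpu/topology/core_id")" \
            "$(cat "$cpu/topology/physical_package_id")"
    done
    # Per-node 2048 kB hugepage pools (node0 held the 2048 pages earlier in this log).
    grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages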
00:03:42.894 EAL: IOMMU type 1 (Type 1) is supported 00:03:42.894 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:42.894 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:42.894 EAL: VFIO support initialized 00:03:42.894 EAL: Ask a virtual area of 0x2e000 bytes 00:03:42.894 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:42.894 EAL: Setting up physically contiguous memory... 00:03:42.894 EAL: Setting maximum number of open files to 524288 00:03:42.894 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:42.894 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:42.894 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:42.894 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.894 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:42.894 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.894 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.894 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:42.894 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:42.894 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.894 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:42.894 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.894 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.894 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:42.894 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:42.894 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.894 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:42.894 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.894 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.894 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:42.894 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:42.894 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.894 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:42.894 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:42.894 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.894 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:42.894 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:42.894 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:42.894 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.894 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:42.894 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:42.894 EAL: Ask a virtual area of 0x400000000 bytes 00:03:42.894 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:42.894 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:42.894 EAL: Ask a virtual area of 0x61000 bytes 00:03:42.894 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:43.153 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:43.153 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.153 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:43.153 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:43.153 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.153 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:43.153 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:43.153 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.153 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:03:43.153 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:43.153 EAL: Ask a virtual area of 0x61000 bytes 00:03:43.153 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:43.153 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:43.153 EAL: Ask a virtual area of 0x400000000 bytes 00:03:43.153 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:43.153 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:43.153 EAL: Hugepages will be freed exactly as allocated. 00:03:43.153 EAL: No shared files mode enabled, IPC is disabled 00:03:43.153 EAL: No shared files mode enabled, IPC is disabled 00:03:43.153 EAL: TSC frequency is ~2700000 KHz 00:03:43.153 EAL: Main lcore 0 is ready (tid=7efc9ada9a00;cpuset=[0]) 00:03:43.154 EAL: Trying to obtain current memory policy. 00:03:43.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.154 EAL: Restoring previous memory policy: 0 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was expanded by 2MB 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:43.154 EAL: Mem event callback 'spdk:(nil)' registered 00:03:43.154 00:03:43.154 00:03:43.154 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.154 http://cunit.sourceforge.net/ 00:03:43.154 00:03:43.154 00:03:43.154 Suite: components_suite 00:03:43.154 Test: vtophys_malloc_test ...passed 00:03:43.154 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:43.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.154 EAL: Restoring previous memory policy: 4 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was expanded by 4MB 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was shrunk by 4MB 00:03:43.154 EAL: Trying to obtain current memory policy. 00:03:43.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.154 EAL: Restoring previous memory policy: 4 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was expanded by 6MB 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was shrunk by 6MB 00:03:43.154 EAL: Trying to obtain current memory policy. 00:03:43.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.154 EAL: Restoring previous memory policy: 4 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was expanded by 10MB 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was shrunk by 10MB 00:03:43.154 EAL: Trying to obtain current memory policy. 
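(The ladder continues below through 18 MB and on up to 1026 MB before the suite passes.) The vtophys_spdk_malloc_test sequence above grows and frees one allocation per step, roughly doubling each time (4, 6, 10, 18, ... 1026 MB), and every request and release shows up as EAL expanding or shrinking the heap in 2 MB hugepage units. One way to watch the same effect from the shell while the test runs, with the binary path assumed from this run; MPOL_PREFERRED for socket 0, as logged above, is the policy that numactl --preferred=0 sets:

    # Sketch: observe per-node hugepage consumption while vtophys runs.
    numactl --preferred=0 ./test/env/vtophys/vtophys &
    while kill -0 $! 2>/dev/null; do
        grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages
        sleep 0.2
    done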
00:03:43.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.154 EAL: Restoring previous memory policy: 4 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was expanded by 18MB 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was shrunk by 18MB 00:03:43.154 EAL: Trying to obtain current memory policy. 00:03:43.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.154 EAL: Restoring previous memory policy: 4 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was expanded by 34MB 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was shrunk by 34MB 00:03:43.154 EAL: Trying to obtain current memory policy. 00:03:43.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.154 EAL: Restoring previous memory policy: 4 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was expanded by 66MB 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was shrunk by 66MB 00:03:43.154 EAL: Trying to obtain current memory policy. 00:03:43.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.154 EAL: Restoring previous memory policy: 4 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was expanded by 130MB 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was shrunk by 130MB 00:03:43.154 EAL: Trying to obtain current memory policy. 00:03:43.154 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.154 EAL: Restoring previous memory policy: 4 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.154 EAL: request: mp_malloc_sync 00:03:43.154 EAL: No shared files mode enabled, IPC is disabled 00:03:43.154 EAL: Heap on socket 0 was expanded by 258MB 00:03:43.154 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.413 EAL: request: mp_malloc_sync 00:03:43.413 EAL: No shared files mode enabled, IPC is disabled 00:03:43.413 EAL: Heap on socket 0 was shrunk by 258MB 00:03:43.413 EAL: Trying to obtain current memory policy. 
00:03:43.413 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.413 EAL: Restoring previous memory policy: 4 00:03:43.413 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.413 EAL: request: mp_malloc_sync 00:03:43.413 EAL: No shared files mode enabled, IPC is disabled 00:03:43.413 EAL: Heap on socket 0 was expanded by 514MB 00:03:43.413 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.413 EAL: request: mp_malloc_sync 00:03:43.413 EAL: No shared files mode enabled, IPC is disabled 00:03:43.413 EAL: Heap on socket 0 was shrunk by 514MB 00:03:43.413 EAL: Trying to obtain current memory policy. 00:03:43.413 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:43.671 EAL: Restoring previous memory policy: 4 00:03:43.671 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.671 EAL: request: mp_malloc_sync 00:03:43.671 EAL: No shared files mode enabled, IPC is disabled 00:03:43.671 EAL: Heap on socket 0 was expanded by 1026MB 00:03:43.930 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.930 EAL: request: mp_malloc_sync 00:03:43.930 EAL: No shared files mode enabled, IPC is disabled 00:03:43.930 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:43.930 passed 00:03:43.930 00:03:43.930 Run Summary: Type Total Ran Passed Failed Inactive 00:03:43.930 suites 1 1 n/a 0 0 00:03:43.930 tests 2 2 2 0 0 00:03:43.930 asserts 497 497 497 0 n/a 00:03:43.930 00:03:43.930 Elapsed time = 0.943 seconds 00:03:43.930 EAL: Calling mem event callback 'spdk:(nil)' 00:03:43.930 EAL: request: mp_malloc_sync 00:03:43.930 EAL: No shared files mode enabled, IPC is disabled 00:03:43.930 EAL: Heap on socket 0 was shrunk by 2MB 00:03:43.930 EAL: No shared files mode enabled, IPC is disabled 00:03:43.930 EAL: No shared files mode enabled, IPC is disabled 00:03:43.930 EAL: No shared files mode enabled, IPC is disabled 00:03:43.930 00:03:43.930 real 0m1.050s 00:03:43.930 user 0m0.518s 00:03:43.930 sys 0m0.503s 00:03:43.930 14:08:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:43.930 14:08:25 -- common/autotest_common.sh@10 -- # set +x 00:03:43.930 ************************************ 00:03:43.930 END TEST env_vtophys 00:03:43.930 ************************************ 00:03:43.930 14:08:25 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:43.930 14:08:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:43.930 14:08:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:43.930 14:08:25 -- common/autotest_common.sh@10 -- # set +x 00:03:44.188 ************************************ 00:03:44.188 START TEST env_pci 00:03:44.188 ************************************ 00:03:44.188 14:08:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:44.188 00:03:44.188 00:03:44.188 CUnit - A unit testing framework for C - Version 2.1-3 00:03:44.188 http://cunit.sourceforge.net/ 00:03:44.188 00:03:44.188 00:03:44.188 Suite: pci 00:03:44.188 Test: pci_hook ...[2024-04-26 14:08:25.623542] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3056998 has claimed it 00:03:44.188 EAL: Cannot find device (10000:00:01.0) 00:03:44.188 EAL: Failed to attach device on primary process 00:03:44.188 passed 00:03:44.188 00:03:44.188 Run Summary: Type Total Ran Passed Failed Inactive 00:03:44.188 suites 1 1 n/a 0 0 00:03:44.188 tests 1 1 1 0 0 
00:03:44.188 asserts 25 25 25 0 n/a 00:03:44.188 00:03:44.189 Elapsed time = 0.017 seconds 00:03:44.189 00:03:44.189 real 0m0.031s 00:03:44.189 user 0m0.013s 00:03:44.189 sys 0m0.018s 00:03:44.189 14:08:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:44.189 14:08:25 -- common/autotest_common.sh@10 -- # set +x 00:03:44.189 ************************************ 00:03:44.189 END TEST env_pci 00:03:44.189 ************************************ 00:03:44.189 14:08:25 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:44.189 14:08:25 -- env/env.sh@15 -- # uname 00:03:44.189 14:08:25 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:44.189 14:08:25 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:44.189 14:08:25 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:44.189 14:08:25 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:03:44.189 14:08:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:44.189 14:08:25 -- common/autotest_common.sh@10 -- # set +x 00:03:44.449 ************************************ 00:03:44.449 START TEST env_dpdk_post_init 00:03:44.449 ************************************ 00:03:44.449 14:08:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:44.449 EAL: Detected CPU lcores: 32 00:03:44.449 EAL: Detected NUMA nodes: 2 00:03:44.449 EAL: Detected shared linkage of DPDK 00:03:44.449 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:44.449 EAL: Selected IOVA mode 'VA' 00:03:44.449 EAL: No free 2048 kB hugepages reported on node 1 00:03:44.449 EAL: VFIO support initialized 00:03:44.449 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:44.449 EAL: Using IOMMU type 1 (Type 1) 00:03:44.449 EAL: Probe PCI driver: spdk_ioat (8086:3c20) device: 0000:00:04.0 (socket 0) 00:03:44.449 EAL: Probe PCI driver: spdk_ioat (8086:3c21) device: 0000:00:04.1 (socket 0) 00:03:44.449 EAL: Probe PCI driver: spdk_ioat (8086:3c22) device: 0000:00:04.2 (socket 0) 00:03:44.449 EAL: Probe PCI driver: spdk_ioat (8086:3c23) device: 0000:00:04.3 (socket 0) 00:03:44.449 EAL: Probe PCI driver: spdk_ioat (8086:3c24) device: 0000:00:04.4 (socket 0) 00:03:44.449 EAL: Probe PCI driver: spdk_ioat (8086:3c25) device: 0000:00:04.5 (socket 0) 00:03:44.449 EAL: Probe PCI driver: spdk_ioat (8086:3c26) device: 0000:00:04.6 (socket 0) 00:03:44.449 EAL: Probe PCI driver: spdk_ioat (8086:3c27) device: 0000:00:04.7 (socket 0) 00:03:44.449 EAL: Probe PCI driver: spdk_ioat (8086:3c20) device: 0000:80:04.0 (socket 1) 00:03:44.449 EAL: Probe PCI driver: spdk_ioat (8086:3c21) device: 0000:80:04.1 (socket 1) 00:03:44.708 EAL: Probe PCI driver: spdk_ioat (8086:3c22) device: 0000:80:04.2 (socket 1) 00:03:44.708 EAL: Probe PCI driver: spdk_ioat (8086:3c23) device: 0000:80:04.3 (socket 1) 00:03:44.708 EAL: Probe PCI driver: spdk_ioat (8086:3c24) device: 0000:80:04.4 (socket 1) 00:03:44.708 EAL: Probe PCI driver: spdk_ioat (8086:3c25) device: 0000:80:04.5 (socket 1) 00:03:44.708 EAL: Probe PCI driver: spdk_ioat (8086:3c26) device: 0000:80:04.6 (socket 1) 00:03:44.708 EAL: Probe PCI driver: spdk_ioat (8086:3c27) device: 0000:80:04.7 (socket 1) 00:03:45.277 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:84:00.0 (socket 1) 00:03:48.577 EAL: Releasing PCI mapped resource for 0000:84:00.0 00:03:48.577 EAL: 
Calling pci_unmap_resource for 0000:84:00.0 at 0x202001040000 00:03:48.836 Starting DPDK initialization... 00:03:48.836 Starting SPDK post initialization... 00:03:48.836 SPDK NVMe probe 00:03:48.836 Attaching to 0000:84:00.0 00:03:48.836 Attached to 0000:84:00.0 00:03:48.836 Cleaning up... 00:03:48.836 00:03:48.836 real 0m4.380s 00:03:48.836 user 0m3.274s 00:03:48.836 sys 0m0.170s 00:03:48.836 14:08:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:48.836 14:08:30 -- common/autotest_common.sh@10 -- # set +x 00:03:48.836 ************************************ 00:03:48.836 END TEST env_dpdk_post_init 00:03:48.836 ************************************ 00:03:48.836 14:08:30 -- env/env.sh@26 -- # uname 00:03:48.836 14:08:30 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:48.836 14:08:30 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:48.836 14:08:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:48.836 14:08:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:48.836 14:08:30 -- common/autotest_common.sh@10 -- # set +x 00:03:48.836 ************************************ 00:03:48.836 START TEST env_mem_callbacks 00:03:48.836 ************************************ 00:03:48.836 14:08:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:48.836 EAL: Detected CPU lcores: 32 00:03:48.836 EAL: Detected NUMA nodes: 2 00:03:48.836 EAL: Detected shared linkage of DPDK 00:03:48.836 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:48.836 EAL: Selected IOVA mode 'VA' 00:03:48.836 EAL: No free 2048 kB hugepages reported on node 1 00:03:48.836 EAL: VFIO support initialized 00:03:48.836 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:48.836 00:03:48.836 00:03:48.836 CUnit - A unit testing framework for C - Version 2.1-3 00:03:48.836 http://cunit.sourceforge.net/ 00:03:48.836 00:03:48.836 00:03:48.836 Suite: memory 00:03:48.836 Test: test ... 
00:03:48.836 register 0x200000200000 2097152 00:03:48.836 malloc 3145728 00:03:48.836 register 0x200000400000 4194304 00:03:48.836 buf 0x200000500000 len 3145728 PASSED 00:03:48.836 malloc 64 00:03:48.836 buf 0x2000004fff40 len 64 PASSED 00:03:48.836 malloc 4194304 00:03:48.836 register 0x200000800000 6291456 00:03:48.836 buf 0x200000a00000 len 4194304 PASSED 00:03:48.836 free 0x200000500000 3145728 00:03:48.836 free 0x2000004fff40 64 00:03:48.836 unregister 0x200000400000 4194304 PASSED 00:03:48.836 free 0x200000a00000 4194304 00:03:48.836 unregister 0x200000800000 6291456 PASSED 00:03:48.836 malloc 8388608 00:03:48.836 register 0x200000400000 10485760 00:03:48.836 buf 0x200000600000 len 8388608 PASSED 00:03:48.836 free 0x200000600000 8388608 00:03:48.836 unregister 0x200000400000 10485760 PASSED 00:03:48.836 passed 00:03:48.836 00:03:48.836 Run Summary: Type Total Ran Passed Failed Inactive 00:03:48.836 suites 1 1 n/a 0 0 00:03:48.836 tests 1 1 1 0 0 00:03:48.836 asserts 15 15 15 0 n/a 00:03:48.836 00:03:48.836 Elapsed time = 0.005 seconds 00:03:48.836 00:03:48.836 real 0m0.046s 00:03:48.836 user 0m0.020s 00:03:48.836 sys 0m0.026s 00:03:48.836 14:08:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:48.836 14:08:30 -- common/autotest_common.sh@10 -- # set +x 00:03:48.836 ************************************ 00:03:48.836 END TEST env_mem_callbacks 00:03:48.836 ************************************ 00:03:48.836 00:03:48.836 real 0m6.474s 00:03:48.836 user 0m4.297s 00:03:48.836 sys 0m1.139s 00:03:48.836 14:08:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:48.836 14:08:30 -- common/autotest_common.sh@10 -- # set +x 00:03:48.836 ************************************ 00:03:48.836 END TEST env 00:03:48.836 ************************************ 00:03:48.836 14:08:30 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:48.836 14:08:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:48.836 14:08:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:48.837 14:08:30 -- common/autotest_common.sh@10 -- # set +x 00:03:49.095 ************************************ 00:03:49.095 START TEST rpc 00:03:49.095 ************************************ 00:03:49.095 14:08:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:49.095 * Looking for test storage... 00:03:49.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:49.095 14:08:30 -- rpc/rpc.sh@65 -- # spdk_pid=3057639 00:03:49.095 14:08:30 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:49.095 14:08:30 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:49.095 14:08:30 -- rpc/rpc.sh@67 -- # waitforlisten 3057639 00:03:49.095 14:08:30 -- common/autotest_common.sh@817 -- # '[' -z 3057639 ']' 00:03:49.095 14:08:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:49.095 14:08:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:49.095 14:08:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:49.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:49.095 14:08:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:49.095 14:08:30 -- common/autotest_common.sh@10 -- # set +x 00:03:49.095 [2024-04-26 14:08:30.619501] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:03:49.095 [2024-04-26 14:08:30.619613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3057639 ] 00:03:49.095 EAL: No free 2048 kB hugepages reported on node 1 00:03:49.353 [2024-04-26 14:08:30.695296] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.353 [2024-04-26 14:08:30.844471] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:49.353 [2024-04-26 14:08:30.844546] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3057639' to capture a snapshot of events at runtime. 00:03:49.353 [2024-04-26 14:08:30.844577] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:49.353 [2024-04-26 14:08:30.844604] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:49.353 [2024-04-26 14:08:30.844627] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3057639 for offline analysis/debug. 00:03:49.353 [2024-04-26 14:08:30.844709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:50.288 14:08:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:50.288 14:08:31 -- common/autotest_common.sh@850 -- # return 0 00:03:50.288 14:08:31 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:50.288 14:08:31 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:50.288 14:08:31 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:50.288 14:08:31 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:50.288 14:08:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:50.288 14:08:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:50.288 14:08:31 -- common/autotest_common.sh@10 -- # set +x 00:03:50.288 ************************************ 00:03:50.288 START TEST rpc_integrity 00:03:50.288 ************************************ 00:03:50.288 14:08:31 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:03:50.288 14:08:31 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:50.288 14:08:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:50.288 14:08:31 -- common/autotest_common.sh@10 -- # set +x 00:03:50.288 14:08:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:50.288 14:08:31 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:50.288 14:08:31 -- rpc/rpc.sh@13 -- # jq length 00:03:50.288 14:08:31 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:50.288 14:08:31 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:50.288 14:08:31 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:03:50.288 14:08:31 -- common/autotest_common.sh@10 -- # set +x 00:03:50.288 14:08:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:50.288 14:08:31 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:50.288 14:08:31 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:50.288 14:08:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:50.288 14:08:31 -- common/autotest_common.sh@10 -- # set +x 00:03:50.288 14:08:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:50.288 14:08:31 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:50.288 { 00:03:50.288 "name": "Malloc0", 00:03:50.288 "aliases": [ 00:03:50.288 "87d353ee-744e-4d6e-88d2-ad20273f4b8d" 00:03:50.288 ], 00:03:50.288 "product_name": "Malloc disk", 00:03:50.288 "block_size": 512, 00:03:50.288 "num_blocks": 16384, 00:03:50.288 "uuid": "87d353ee-744e-4d6e-88d2-ad20273f4b8d", 00:03:50.288 "assigned_rate_limits": { 00:03:50.288 "rw_ios_per_sec": 0, 00:03:50.288 "rw_mbytes_per_sec": 0, 00:03:50.288 "r_mbytes_per_sec": 0, 00:03:50.288 "w_mbytes_per_sec": 0 00:03:50.288 }, 00:03:50.288 "claimed": false, 00:03:50.288 "zoned": false, 00:03:50.288 "supported_io_types": { 00:03:50.288 "read": true, 00:03:50.288 "write": true, 00:03:50.288 "unmap": true, 00:03:50.288 "write_zeroes": true, 00:03:50.288 "flush": true, 00:03:50.288 "reset": true, 00:03:50.288 "compare": false, 00:03:50.288 "compare_and_write": false, 00:03:50.288 "abort": true, 00:03:50.288 "nvme_admin": false, 00:03:50.288 "nvme_io": false 00:03:50.288 }, 00:03:50.288 "memory_domains": [ 00:03:50.288 { 00:03:50.288 "dma_device_id": "system", 00:03:50.288 "dma_device_type": 1 00:03:50.288 }, 00:03:50.288 { 00:03:50.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.288 "dma_device_type": 2 00:03:50.288 } 00:03:50.288 ], 00:03:50.288 "driver_specific": {} 00:03:50.288 } 00:03:50.288 ]' 00:03:50.288 14:08:31 -- rpc/rpc.sh@17 -- # jq length 00:03:50.546 14:08:31 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:50.546 14:08:31 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:50.546 14:08:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:50.546 14:08:31 -- common/autotest_common.sh@10 -- # set +x 00:03:50.546 [2024-04-26 14:08:31.868378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:50.546 [2024-04-26 14:08:31.868428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:50.546 [2024-04-26 14:08:31.868452] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe6d780 00:03:50.546 [2024-04-26 14:08:31.868468] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:50.546 [2024-04-26 14:08:31.870026] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:50.546 [2024-04-26 14:08:31.870053] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:50.546 Passthru0 00:03:50.546 14:08:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:50.546 14:08:31 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:50.546 14:08:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:50.546 14:08:31 -- common/autotest_common.sh@10 -- # set +x 00:03:50.546 14:08:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:50.546 14:08:31 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:50.546 { 00:03:50.546 "name": "Malloc0", 00:03:50.546 "aliases": [ 00:03:50.546 "87d353ee-744e-4d6e-88d2-ad20273f4b8d" 00:03:50.546 ], 00:03:50.546 "product_name": "Malloc disk", 00:03:50.546 "block_size": 512, 
00:03:50.546 "num_blocks": 16384, 00:03:50.546 "uuid": "87d353ee-744e-4d6e-88d2-ad20273f4b8d", 00:03:50.546 "assigned_rate_limits": { 00:03:50.546 "rw_ios_per_sec": 0, 00:03:50.546 "rw_mbytes_per_sec": 0, 00:03:50.546 "r_mbytes_per_sec": 0, 00:03:50.546 "w_mbytes_per_sec": 0 00:03:50.546 }, 00:03:50.546 "claimed": true, 00:03:50.546 "claim_type": "exclusive_write", 00:03:50.546 "zoned": false, 00:03:50.546 "supported_io_types": { 00:03:50.546 "read": true, 00:03:50.546 "write": true, 00:03:50.546 "unmap": true, 00:03:50.546 "write_zeroes": true, 00:03:50.546 "flush": true, 00:03:50.546 "reset": true, 00:03:50.546 "compare": false, 00:03:50.546 "compare_and_write": false, 00:03:50.546 "abort": true, 00:03:50.546 "nvme_admin": false, 00:03:50.546 "nvme_io": false 00:03:50.546 }, 00:03:50.546 "memory_domains": [ 00:03:50.546 { 00:03:50.546 "dma_device_id": "system", 00:03:50.546 "dma_device_type": 1 00:03:50.546 }, 00:03:50.546 { 00:03:50.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.546 "dma_device_type": 2 00:03:50.546 } 00:03:50.546 ], 00:03:50.546 "driver_specific": {} 00:03:50.546 }, 00:03:50.546 { 00:03:50.546 "name": "Passthru0", 00:03:50.546 "aliases": [ 00:03:50.546 "bbcb03f3-0d51-5e31-a014-e3ae83c50e47" 00:03:50.546 ], 00:03:50.546 "product_name": "passthru", 00:03:50.546 "block_size": 512, 00:03:50.546 "num_blocks": 16384, 00:03:50.546 "uuid": "bbcb03f3-0d51-5e31-a014-e3ae83c50e47", 00:03:50.546 "assigned_rate_limits": { 00:03:50.546 "rw_ios_per_sec": 0, 00:03:50.546 "rw_mbytes_per_sec": 0, 00:03:50.546 "r_mbytes_per_sec": 0, 00:03:50.546 "w_mbytes_per_sec": 0 00:03:50.546 }, 00:03:50.546 "claimed": false, 00:03:50.546 "zoned": false, 00:03:50.546 "supported_io_types": { 00:03:50.546 "read": true, 00:03:50.546 "write": true, 00:03:50.546 "unmap": true, 00:03:50.546 "write_zeroes": true, 00:03:50.546 "flush": true, 00:03:50.546 "reset": true, 00:03:50.546 "compare": false, 00:03:50.546 "compare_and_write": false, 00:03:50.546 "abort": true, 00:03:50.546 "nvme_admin": false, 00:03:50.546 "nvme_io": false 00:03:50.546 }, 00:03:50.546 "memory_domains": [ 00:03:50.546 { 00:03:50.546 "dma_device_id": "system", 00:03:50.546 "dma_device_type": 1 00:03:50.546 }, 00:03:50.546 { 00:03:50.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.546 "dma_device_type": 2 00:03:50.546 } 00:03:50.546 ], 00:03:50.546 "driver_specific": { 00:03:50.546 "passthru": { 00:03:50.546 "name": "Passthru0", 00:03:50.546 "base_bdev_name": "Malloc0" 00:03:50.546 } 00:03:50.546 } 00:03:50.546 } 00:03:50.546 ]' 00:03:50.546 14:08:31 -- rpc/rpc.sh@21 -- # jq length 00:03:50.546 14:08:31 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:50.546 14:08:31 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:50.546 14:08:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:50.546 14:08:31 -- common/autotest_common.sh@10 -- # set +x 00:03:50.546 14:08:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:50.546 14:08:31 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:50.546 14:08:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:50.546 14:08:31 -- common/autotest_common.sh@10 -- # set +x 00:03:50.546 14:08:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:50.546 14:08:31 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:50.546 14:08:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:50.546 14:08:31 -- common/autotest_common.sh@10 -- # set +x 00:03:50.546 14:08:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:50.546 14:08:31 -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:03:50.546 14:08:31 -- rpc/rpc.sh@26 -- # jq length 00:03:50.546 14:08:31 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:50.546 00:03:50.546 real 0m0.247s 00:03:50.546 user 0m0.162s 00:03:50.546 sys 0m0.024s 00:03:50.546 14:08:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:50.546 14:08:31 -- common/autotest_common.sh@10 -- # set +x 00:03:50.546 ************************************ 00:03:50.546 END TEST rpc_integrity 00:03:50.546 ************************************ 00:03:50.546 14:08:32 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:50.546 14:08:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:50.546 14:08:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:50.546 14:08:32 -- common/autotest_common.sh@10 -- # set +x 00:03:50.804 ************************************ 00:03:50.804 START TEST rpc_plugins 00:03:50.804 ************************************ 00:03:50.804 14:08:32 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:03:50.804 14:08:32 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:50.804 14:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:50.804 14:08:32 -- common/autotest_common.sh@10 -- # set +x 00:03:50.804 14:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:50.804 14:08:32 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:50.804 14:08:32 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:50.804 14:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:50.804 14:08:32 -- common/autotest_common.sh@10 -- # set +x 00:03:50.804 14:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:50.804 14:08:32 -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:50.804 { 00:03:50.804 "name": "Malloc1", 00:03:50.804 "aliases": [ 00:03:50.804 "cc7827fa-4281-4f2c-901f-6f6aa2fd8d39" 00:03:50.804 ], 00:03:50.804 "product_name": "Malloc disk", 00:03:50.804 "block_size": 4096, 00:03:50.804 "num_blocks": 256, 00:03:50.804 "uuid": "cc7827fa-4281-4f2c-901f-6f6aa2fd8d39", 00:03:50.804 "assigned_rate_limits": { 00:03:50.804 "rw_ios_per_sec": 0, 00:03:50.804 "rw_mbytes_per_sec": 0, 00:03:50.804 "r_mbytes_per_sec": 0, 00:03:50.804 "w_mbytes_per_sec": 0 00:03:50.804 }, 00:03:50.804 "claimed": false, 00:03:50.804 "zoned": false, 00:03:50.804 "supported_io_types": { 00:03:50.804 "read": true, 00:03:50.804 "write": true, 00:03:50.804 "unmap": true, 00:03:50.804 "write_zeroes": true, 00:03:50.804 "flush": true, 00:03:50.804 "reset": true, 00:03:50.804 "compare": false, 00:03:50.804 "compare_and_write": false, 00:03:50.804 "abort": true, 00:03:50.804 "nvme_admin": false, 00:03:50.804 "nvme_io": false 00:03:50.804 }, 00:03:50.804 "memory_domains": [ 00:03:50.804 { 00:03:50.804 "dma_device_id": "system", 00:03:50.804 "dma_device_type": 1 00:03:50.804 }, 00:03:50.804 { 00:03:50.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.804 "dma_device_type": 2 00:03:50.804 } 00:03:50.804 ], 00:03:50.804 "driver_specific": {} 00:03:50.804 } 00:03:50.804 ]' 00:03:50.804 14:08:32 -- rpc/rpc.sh@32 -- # jq length 00:03:50.804 14:08:32 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:50.804 14:08:32 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:50.804 14:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:50.804 14:08:32 -- common/autotest_common.sh@10 -- # set +x 00:03:50.804 14:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:50.804 14:08:32 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:50.804 14:08:32 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:03:50.804 14:08:32 -- common/autotest_common.sh@10 -- # set +x 00:03:50.804 14:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:50.804 14:08:32 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:50.804 14:08:32 -- rpc/rpc.sh@36 -- # jq length 00:03:50.804 14:08:32 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:50.804 00:03:50.804 real 0m0.126s 00:03:50.804 user 0m0.086s 00:03:50.804 sys 0m0.009s 00:03:50.804 14:08:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:50.804 14:08:32 -- common/autotest_common.sh@10 -- # set +x 00:03:50.804 ************************************ 00:03:50.804 END TEST rpc_plugins 00:03:50.804 ************************************ 00:03:50.804 14:08:32 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:50.804 14:08:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:50.804 14:08:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:50.804 14:08:32 -- common/autotest_common.sh@10 -- # set +x 00:03:50.804 ************************************ 00:03:50.804 START TEST rpc_trace_cmd_test 00:03:50.804 ************************************ 00:03:50.804 14:08:32 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:03:50.804 14:08:32 -- rpc/rpc.sh@40 -- # local info 00:03:50.804 14:08:32 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:50.804 14:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:50.804 14:08:32 -- common/autotest_common.sh@10 -- # set +x 00:03:51.062 14:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:51.062 14:08:32 -- rpc/rpc.sh@42 -- # info='{ 00:03:51.062 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3057639", 00:03:51.062 "tpoint_group_mask": "0x8", 00:03:51.062 "iscsi_conn": { 00:03:51.062 "mask": "0x2", 00:03:51.062 "tpoint_mask": "0x0" 00:03:51.062 }, 00:03:51.062 "scsi": { 00:03:51.062 "mask": "0x4", 00:03:51.062 "tpoint_mask": "0x0" 00:03:51.062 }, 00:03:51.062 "bdev": { 00:03:51.062 "mask": "0x8", 00:03:51.062 "tpoint_mask": "0xffffffffffffffff" 00:03:51.062 }, 00:03:51.062 "nvmf_rdma": { 00:03:51.062 "mask": "0x10", 00:03:51.062 "tpoint_mask": "0x0" 00:03:51.062 }, 00:03:51.062 "nvmf_tcp": { 00:03:51.062 "mask": "0x20", 00:03:51.062 "tpoint_mask": "0x0" 00:03:51.062 }, 00:03:51.062 "ftl": { 00:03:51.062 "mask": "0x40", 00:03:51.062 "tpoint_mask": "0x0" 00:03:51.062 }, 00:03:51.062 "blobfs": { 00:03:51.062 "mask": "0x80", 00:03:51.062 "tpoint_mask": "0x0" 00:03:51.062 }, 00:03:51.062 "dsa": { 00:03:51.062 "mask": "0x200", 00:03:51.062 "tpoint_mask": "0x0" 00:03:51.062 }, 00:03:51.062 "thread": { 00:03:51.062 "mask": "0x400", 00:03:51.062 "tpoint_mask": "0x0" 00:03:51.062 }, 00:03:51.062 "nvme_pcie": { 00:03:51.062 "mask": "0x800", 00:03:51.062 "tpoint_mask": "0x0" 00:03:51.062 }, 00:03:51.062 "iaa": { 00:03:51.062 "mask": "0x1000", 00:03:51.062 "tpoint_mask": "0x0" 00:03:51.062 }, 00:03:51.062 "nvme_tcp": { 00:03:51.062 "mask": "0x2000", 00:03:51.062 "tpoint_mask": "0x0" 00:03:51.062 }, 00:03:51.062 "bdev_nvme": { 00:03:51.062 "mask": "0x4000", 00:03:51.062 "tpoint_mask": "0x0" 00:03:51.062 }, 00:03:51.062 "sock": { 00:03:51.062 "mask": "0x8000", 00:03:51.062 "tpoint_mask": "0x0" 00:03:51.062 } 00:03:51.062 }' 00:03:51.062 14:08:32 -- rpc/rpc.sh@43 -- # jq length 00:03:51.062 14:08:32 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:51.062 14:08:32 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:51.062 14:08:32 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:51.062 14:08:32 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 
00:03:51.062 14:08:32 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:51.062 14:08:32 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:51.062 14:08:32 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:51.062 14:08:32 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:51.062 14:08:32 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:51.062 00:03:51.062 real 0m0.220s 00:03:51.063 user 0m0.189s 00:03:51.063 sys 0m0.021s 00:03:51.063 14:08:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:51.063 14:08:32 -- common/autotest_common.sh@10 -- # set +x 00:03:51.063 ************************************ 00:03:51.063 END TEST rpc_trace_cmd_test 00:03:51.063 ************************************ 00:03:51.063 14:08:32 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:51.063 14:08:32 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:51.063 14:08:32 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:51.063 14:08:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:51.063 14:08:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:51.063 14:08:32 -- common/autotest_common.sh@10 -- # set +x 00:03:51.321 ************************************ 00:03:51.321 START TEST rpc_daemon_integrity 00:03:51.321 ************************************ 00:03:51.321 14:08:32 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:03:51.321 14:08:32 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:51.321 14:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:51.321 14:08:32 -- common/autotest_common.sh@10 -- # set +x 00:03:51.321 14:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:51.321 14:08:32 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:51.321 14:08:32 -- rpc/rpc.sh@13 -- # jq length 00:03:51.321 14:08:32 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:51.321 14:08:32 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:51.321 14:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:51.321 14:08:32 -- common/autotest_common.sh@10 -- # set +x 00:03:51.321 14:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:51.321 14:08:32 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:51.321 14:08:32 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:51.321 14:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:51.321 14:08:32 -- common/autotest_common.sh@10 -- # set +x 00:03:51.321 14:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:51.321 14:08:32 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:51.321 { 00:03:51.321 "name": "Malloc2", 00:03:51.321 "aliases": [ 00:03:51.321 "a69b891d-8d42-45fc-baac-014574aaf390" 00:03:51.321 ], 00:03:51.321 "product_name": "Malloc disk", 00:03:51.321 "block_size": 512, 00:03:51.321 "num_blocks": 16384, 00:03:51.321 "uuid": "a69b891d-8d42-45fc-baac-014574aaf390", 00:03:51.321 "assigned_rate_limits": { 00:03:51.321 "rw_ios_per_sec": 0, 00:03:51.321 "rw_mbytes_per_sec": 0, 00:03:51.321 "r_mbytes_per_sec": 0, 00:03:51.321 "w_mbytes_per_sec": 0 00:03:51.321 }, 00:03:51.321 "claimed": false, 00:03:51.321 "zoned": false, 00:03:51.321 "supported_io_types": { 00:03:51.321 "read": true, 00:03:51.321 "write": true, 00:03:51.321 "unmap": true, 00:03:51.321 "write_zeroes": true, 00:03:51.321 "flush": true, 00:03:51.321 "reset": true, 00:03:51.321 "compare": false, 00:03:51.321 "compare_and_write": false, 00:03:51.321 "abort": true, 00:03:51.321 "nvme_admin": false, 00:03:51.321 "nvme_io": false 00:03:51.321 }, 00:03:51.321 "memory_domains": [ 00:03:51.321 { 00:03:51.321 "dma_device_id": "system", 00:03:51.321 
"dma_device_type": 1 00:03:51.321 }, 00:03:51.321 { 00:03:51.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.321 "dma_device_type": 2 00:03:51.321 } 00:03:51.321 ], 00:03:51.321 "driver_specific": {} 00:03:51.321 } 00:03:51.321 ]' 00:03:51.321 14:08:32 -- rpc/rpc.sh@17 -- # jq length 00:03:51.321 14:08:32 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:51.321 14:08:32 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:51.321 14:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:51.321 14:08:32 -- common/autotest_common.sh@10 -- # set +x 00:03:51.321 [2024-04-26 14:08:32.847330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:51.321 [2024-04-26 14:08:32.847378] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:51.321 [2024-04-26 14:08:32.847403] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe6d9b0 00:03:51.321 [2024-04-26 14:08:32.847418] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:51.321 [2024-04-26 14:08:32.848819] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:51.321 [2024-04-26 14:08:32.848845] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:51.321 Passthru0 00:03:51.321 14:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:51.321 14:08:32 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:51.321 14:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:51.321 14:08:32 -- common/autotest_common.sh@10 -- # set +x 00:03:51.321 14:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:51.321 14:08:32 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:51.321 { 00:03:51.321 "name": "Malloc2", 00:03:51.321 "aliases": [ 00:03:51.321 "a69b891d-8d42-45fc-baac-014574aaf390" 00:03:51.321 ], 00:03:51.321 "product_name": "Malloc disk", 00:03:51.321 "block_size": 512, 00:03:51.321 "num_blocks": 16384, 00:03:51.321 "uuid": "a69b891d-8d42-45fc-baac-014574aaf390", 00:03:51.321 "assigned_rate_limits": { 00:03:51.321 "rw_ios_per_sec": 0, 00:03:51.321 "rw_mbytes_per_sec": 0, 00:03:51.321 "r_mbytes_per_sec": 0, 00:03:51.321 "w_mbytes_per_sec": 0 00:03:51.321 }, 00:03:51.321 "claimed": true, 00:03:51.321 "claim_type": "exclusive_write", 00:03:51.321 "zoned": false, 00:03:51.321 "supported_io_types": { 00:03:51.321 "read": true, 00:03:51.321 "write": true, 00:03:51.321 "unmap": true, 00:03:51.321 "write_zeroes": true, 00:03:51.321 "flush": true, 00:03:51.321 "reset": true, 00:03:51.321 "compare": false, 00:03:51.321 "compare_and_write": false, 00:03:51.321 "abort": true, 00:03:51.321 "nvme_admin": false, 00:03:51.321 "nvme_io": false 00:03:51.321 }, 00:03:51.321 "memory_domains": [ 00:03:51.321 { 00:03:51.321 "dma_device_id": "system", 00:03:51.321 "dma_device_type": 1 00:03:51.321 }, 00:03:51.321 { 00:03:51.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.321 "dma_device_type": 2 00:03:51.321 } 00:03:51.321 ], 00:03:51.321 "driver_specific": {} 00:03:51.321 }, 00:03:51.321 { 00:03:51.321 "name": "Passthru0", 00:03:51.321 "aliases": [ 00:03:51.321 "8ecce84a-b8b2-5359-b56e-27ad53240c82" 00:03:51.321 ], 00:03:51.321 "product_name": "passthru", 00:03:51.321 "block_size": 512, 00:03:51.321 "num_blocks": 16384, 00:03:51.321 "uuid": "8ecce84a-b8b2-5359-b56e-27ad53240c82", 00:03:51.321 "assigned_rate_limits": { 00:03:51.321 "rw_ios_per_sec": 0, 00:03:51.321 "rw_mbytes_per_sec": 0, 00:03:51.321 "r_mbytes_per_sec": 0, 00:03:51.321 
"w_mbytes_per_sec": 0 00:03:51.321 }, 00:03:51.321 "claimed": false, 00:03:51.321 "zoned": false, 00:03:51.321 "supported_io_types": { 00:03:51.321 "read": true, 00:03:51.321 "write": true, 00:03:51.321 "unmap": true, 00:03:51.321 "write_zeroes": true, 00:03:51.321 "flush": true, 00:03:51.321 "reset": true, 00:03:51.321 "compare": false, 00:03:51.321 "compare_and_write": false, 00:03:51.321 "abort": true, 00:03:51.321 "nvme_admin": false, 00:03:51.321 "nvme_io": false 00:03:51.321 }, 00:03:51.321 "memory_domains": [ 00:03:51.321 { 00:03:51.321 "dma_device_id": "system", 00:03:51.321 "dma_device_type": 1 00:03:51.321 }, 00:03:51.321 { 00:03:51.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.321 "dma_device_type": 2 00:03:51.322 } 00:03:51.322 ], 00:03:51.322 "driver_specific": { 00:03:51.322 "passthru": { 00:03:51.322 "name": "Passthru0", 00:03:51.322 "base_bdev_name": "Malloc2" 00:03:51.322 } 00:03:51.322 } 00:03:51.322 } 00:03:51.322 ]' 00:03:51.322 14:08:32 -- rpc/rpc.sh@21 -- # jq length 00:03:51.580 14:08:32 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:51.580 14:08:32 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:51.580 14:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:51.580 14:08:32 -- common/autotest_common.sh@10 -- # set +x 00:03:51.580 14:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:51.580 14:08:32 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:51.580 14:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:51.580 14:08:32 -- common/autotest_common.sh@10 -- # set +x 00:03:51.580 14:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:51.580 14:08:32 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:51.580 14:08:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:51.580 14:08:32 -- common/autotest_common.sh@10 -- # set +x 00:03:51.580 14:08:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:51.580 14:08:32 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:51.580 14:08:32 -- rpc/rpc.sh@26 -- # jq length 00:03:51.580 14:08:32 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:51.580 00:03:51.580 real 0m0.260s 00:03:51.580 user 0m0.162s 00:03:51.580 sys 0m0.030s 00:03:51.580 14:08:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:51.580 14:08:32 -- common/autotest_common.sh@10 -- # set +x 00:03:51.580 ************************************ 00:03:51.580 END TEST rpc_daemon_integrity 00:03:51.580 ************************************ 00:03:51.580 14:08:33 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:51.580 14:08:33 -- rpc/rpc.sh@84 -- # killprocess 3057639 00:03:51.580 14:08:33 -- common/autotest_common.sh@936 -- # '[' -z 3057639 ']' 00:03:51.580 14:08:33 -- common/autotest_common.sh@940 -- # kill -0 3057639 00:03:51.580 14:08:33 -- common/autotest_common.sh@941 -- # uname 00:03:51.580 14:08:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:51.580 14:08:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3057639 00:03:51.580 14:08:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:51.580 14:08:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:51.580 14:08:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3057639' 00:03:51.580 killing process with pid 3057639 00:03:51.580 14:08:33 -- common/autotest_common.sh@955 -- # kill 3057639 00:03:51.580 14:08:33 -- common/autotest_common.sh@960 -- # wait 3057639 00:03:51.839 00:03:51.839 real 0m2.849s 00:03:51.839 user 0m3.792s 
00:03:51.839 sys 0m0.763s 00:03:51.839 14:08:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:51.839 14:08:33 -- common/autotest_common.sh@10 -- # set +x 00:03:51.839 ************************************ 00:03:51.839 END TEST rpc 00:03:51.839 ************************************ 00:03:51.839 14:08:33 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:51.839 14:08:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:51.839 14:08:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:51.839 14:08:33 -- common/autotest_common.sh@10 -- # set +x 00:03:52.096 ************************************ 00:03:52.096 START TEST skip_rpc 00:03:52.096 ************************************ 00:03:52.096 14:08:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:52.096 * Looking for test storage... 00:03:52.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:52.096 14:08:33 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:52.096 14:08:33 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:52.096 14:08:33 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:52.096 14:08:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:52.096 14:08:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:52.096 14:08:33 -- common/autotest_common.sh@10 -- # set +x 00:03:52.096 ************************************ 00:03:52.096 START TEST skip_rpc 00:03:52.096 ************************************ 00:03:52.096 14:08:33 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:03:52.096 14:08:33 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3058147 00:03:52.096 14:08:33 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:52.096 14:08:33 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:52.096 14:08:33 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:52.355 [2024-04-26 14:08:33.722014] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
00:03:52.355 [2024-04-26 14:08:33.722115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3058147 ] 00:03:52.355 EAL: No free 2048 kB hugepages reported on node 1 00:03:52.355 [2024-04-26 14:08:33.797171] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.355 [2024-04-26 14:08:33.924129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.643 14:08:38 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:57.643 14:08:38 -- common/autotest_common.sh@638 -- # local es=0 00:03:57.643 14:08:38 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:57.643 14:08:38 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:03:57.643 14:08:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:57.643 14:08:38 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:03:57.643 14:08:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:03:57.643 14:08:38 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:03:57.643 14:08:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:57.643 14:08:38 -- common/autotest_common.sh@10 -- # set +x 00:03:57.643 14:08:38 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:03:57.643 14:08:38 -- common/autotest_common.sh@641 -- # es=1 00:03:57.643 14:08:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:03:57.643 14:08:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:03:57.643 14:08:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:03:57.643 14:08:38 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:57.643 14:08:38 -- rpc/skip_rpc.sh@23 -- # killprocess 3058147 00:03:57.643 14:08:38 -- common/autotest_common.sh@936 -- # '[' -z 3058147 ']' 00:03:57.643 14:08:38 -- common/autotest_common.sh@940 -- # kill -0 3058147 00:03:57.643 14:08:38 -- common/autotest_common.sh@941 -- # uname 00:03:57.643 14:08:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:57.643 14:08:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3058147 00:03:57.643 14:08:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:57.643 14:08:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:57.643 14:08:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3058147' 00:03:57.643 killing process with pid 3058147 00:03:57.643 14:08:38 -- common/autotest_common.sh@955 -- # kill 3058147 00:03:57.643 14:08:38 -- common/autotest_common.sh@960 -- # wait 3058147 00:03:57.643 00:03:57.643 real 0m5.362s 00:03:57.643 user 0m5.064s 00:03:57.643 sys 0m0.298s 00:03:57.643 14:08:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:57.643 14:08:39 -- common/autotest_common.sh@10 -- # set +x 00:03:57.643 ************************************ 00:03:57.643 END TEST skip_rpc 00:03:57.643 ************************************ 00:03:57.643 14:08:39 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:57.643 14:08:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:57.643 14:08:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:57.643 14:08:39 -- common/autotest_common.sh@10 -- # set +x 00:03:57.643 ************************************ 00:03:57.643 START TEST skip_rpc_with_json 00:03:57.643 ************************************ 
00:03:57.643 14:08:39 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:03:57.643 14:08:39 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:57.643 14:08:39 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3058641 00:03:57.643 14:08:39 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:57.643 14:08:39 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:57.643 14:08:39 -- rpc/skip_rpc.sh@31 -- # waitforlisten 3058641 00:03:57.643 14:08:39 -- common/autotest_common.sh@817 -- # '[' -z 3058641 ']' 00:03:57.643 14:08:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:57.643 14:08:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:57.643 14:08:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:57.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:57.643 14:08:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:57.643 14:08:39 -- common/autotest_common.sh@10 -- # set +x 00:03:57.643 [2024-04-26 14:08:39.211338] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:03:57.643 [2024-04-26 14:08:39.211427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3058641 ] 00:03:57.902 EAL: No free 2048 kB hugepages reported on node 1 00:03:57.902 [2024-04-26 14:08:39.271903] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.902 [2024-04-26 14:08:39.389641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.160 14:08:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:58.160 14:08:39 -- common/autotest_common.sh@850 -- # return 0 00:03:58.160 14:08:39 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:58.160 14:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:58.160 14:08:39 -- common/autotest_common.sh@10 -- # set +x 00:03:58.160 [2024-04-26 14:08:39.623381] nvmf_rpc.c:2509:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:58.160 request: 00:03:58.160 { 00:03:58.160 "trtype": "tcp", 00:03:58.160 "method": "nvmf_get_transports", 00:03:58.160 "req_id": 1 00:03:58.160 } 00:03:58.160 Got JSON-RPC error response 00:03:58.160 response: 00:03:58.160 { 00:03:58.160 "code": -19, 00:03:58.160 "message": "No such device" 00:03:58.160 } 00:03:58.160 14:08:39 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:03:58.160 14:08:39 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:58.160 14:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:58.160 14:08:39 -- common/autotest_common.sh@10 -- # set +x 00:03:58.160 [2024-04-26 14:08:39.631496] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:58.160 14:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:58.160 14:08:39 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:58.161 14:08:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:03:58.161 14:08:39 -- common/autotest_common.sh@10 -- # set +x 00:03:58.419 14:08:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:03:58.419 14:08:39 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:58.419 { 
00:03:58.419 "subsystems": [ 00:03:58.419 { 00:03:58.419 "subsystem": "vfio_user_target", 00:03:58.419 "config": null 00:03:58.419 }, 00:03:58.419 { 00:03:58.419 "subsystem": "keyring", 00:03:58.419 "config": [] 00:03:58.419 }, 00:03:58.419 { 00:03:58.419 "subsystem": "iobuf", 00:03:58.419 "config": [ 00:03:58.419 { 00:03:58.419 "method": "iobuf_set_options", 00:03:58.419 "params": { 00:03:58.419 "small_pool_count": 8192, 00:03:58.419 "large_pool_count": 1024, 00:03:58.419 "small_bufsize": 8192, 00:03:58.419 "large_bufsize": 135168 00:03:58.419 } 00:03:58.419 } 00:03:58.419 ] 00:03:58.419 }, 00:03:58.419 { 00:03:58.419 "subsystem": "sock", 00:03:58.419 "config": [ 00:03:58.419 { 00:03:58.419 "method": "sock_impl_set_options", 00:03:58.419 "params": { 00:03:58.419 "impl_name": "posix", 00:03:58.419 "recv_buf_size": 2097152, 00:03:58.419 "send_buf_size": 2097152, 00:03:58.419 "enable_recv_pipe": true, 00:03:58.419 "enable_quickack": false, 00:03:58.419 "enable_placement_id": 0, 00:03:58.419 "enable_zerocopy_send_server": true, 00:03:58.419 "enable_zerocopy_send_client": false, 00:03:58.419 "zerocopy_threshold": 0, 00:03:58.419 "tls_version": 0, 00:03:58.419 "enable_ktls": false 00:03:58.419 } 00:03:58.419 }, 00:03:58.419 { 00:03:58.419 "method": "sock_impl_set_options", 00:03:58.419 "params": { 00:03:58.419 "impl_name": "ssl", 00:03:58.419 "recv_buf_size": 4096, 00:03:58.419 "send_buf_size": 4096, 00:03:58.419 "enable_recv_pipe": true, 00:03:58.419 "enable_quickack": false, 00:03:58.419 "enable_placement_id": 0, 00:03:58.419 "enable_zerocopy_send_server": true, 00:03:58.419 "enable_zerocopy_send_client": false, 00:03:58.419 "zerocopy_threshold": 0, 00:03:58.419 "tls_version": 0, 00:03:58.419 "enable_ktls": false 00:03:58.419 } 00:03:58.419 } 00:03:58.419 ] 00:03:58.419 }, 00:03:58.419 { 00:03:58.419 "subsystem": "vmd", 00:03:58.419 "config": [] 00:03:58.419 }, 00:03:58.419 { 00:03:58.419 "subsystem": "accel", 00:03:58.419 "config": [ 00:03:58.419 { 00:03:58.419 "method": "accel_set_options", 00:03:58.419 "params": { 00:03:58.419 "small_cache_size": 128, 00:03:58.419 "large_cache_size": 16, 00:03:58.419 "task_count": 2048, 00:03:58.419 "sequence_count": 2048, 00:03:58.419 "buf_count": 2048 00:03:58.419 } 00:03:58.419 } 00:03:58.419 ] 00:03:58.419 }, 00:03:58.419 { 00:03:58.419 "subsystem": "bdev", 00:03:58.419 "config": [ 00:03:58.419 { 00:03:58.419 "method": "bdev_set_options", 00:03:58.419 "params": { 00:03:58.419 "bdev_io_pool_size": 65535, 00:03:58.419 "bdev_io_cache_size": 256, 00:03:58.419 "bdev_auto_examine": true, 00:03:58.419 "iobuf_small_cache_size": 128, 00:03:58.419 "iobuf_large_cache_size": 16 00:03:58.419 } 00:03:58.419 }, 00:03:58.419 { 00:03:58.419 "method": "bdev_raid_set_options", 00:03:58.419 "params": { 00:03:58.419 "process_window_size_kb": 1024 00:03:58.419 } 00:03:58.419 }, 00:03:58.419 { 00:03:58.419 "method": "bdev_iscsi_set_options", 00:03:58.419 "params": { 00:03:58.419 "timeout_sec": 30 00:03:58.419 } 00:03:58.419 }, 00:03:58.419 { 00:03:58.419 "method": "bdev_nvme_set_options", 00:03:58.419 "params": { 00:03:58.419 "action_on_timeout": "none", 00:03:58.419 "timeout_us": 0, 00:03:58.419 "timeout_admin_us": 0, 00:03:58.419 "keep_alive_timeout_ms": 10000, 00:03:58.419 "arbitration_burst": 0, 00:03:58.419 "low_priority_weight": 0, 00:03:58.419 "medium_priority_weight": 0, 00:03:58.419 "high_priority_weight": 0, 00:03:58.419 "nvme_adminq_poll_period_us": 10000, 00:03:58.419 "nvme_ioq_poll_period_us": 0, 00:03:58.419 "io_queue_requests": 0, 00:03:58.419 
"delay_cmd_submit": true, 00:03:58.419 "transport_retry_count": 4, 00:03:58.419 "bdev_retry_count": 3, 00:03:58.419 "transport_ack_timeout": 0, 00:03:58.419 "ctrlr_loss_timeout_sec": 0, 00:03:58.419 "reconnect_delay_sec": 0, 00:03:58.419 "fast_io_fail_timeout_sec": 0, 00:03:58.419 "disable_auto_failback": false, 00:03:58.419 "generate_uuids": false, 00:03:58.419 "transport_tos": 0, 00:03:58.419 "nvme_error_stat": false, 00:03:58.419 "rdma_srq_size": 0, 00:03:58.419 "io_path_stat": false, 00:03:58.419 "allow_accel_sequence": false, 00:03:58.420 "rdma_max_cq_size": 0, 00:03:58.420 "rdma_cm_event_timeout_ms": 0, 00:03:58.420 "dhchap_digests": [ 00:03:58.420 "sha256", 00:03:58.420 "sha384", 00:03:58.420 "sha512" 00:03:58.420 ], 00:03:58.420 "dhchap_dhgroups": [ 00:03:58.420 "null", 00:03:58.420 "ffdhe2048", 00:03:58.420 "ffdhe3072", 00:03:58.420 "ffdhe4096", 00:03:58.420 "ffdhe6144", 00:03:58.420 "ffdhe8192" 00:03:58.420 ] 00:03:58.420 } 00:03:58.420 }, 00:03:58.420 { 00:03:58.420 "method": "bdev_nvme_set_hotplug", 00:03:58.420 "params": { 00:03:58.420 "period_us": 100000, 00:03:58.420 "enable": false 00:03:58.420 } 00:03:58.420 }, 00:03:58.420 { 00:03:58.420 "method": "bdev_wait_for_examine" 00:03:58.420 } 00:03:58.420 ] 00:03:58.420 }, 00:03:58.420 { 00:03:58.420 "subsystem": "scsi", 00:03:58.420 "config": null 00:03:58.420 }, 00:03:58.420 { 00:03:58.420 "subsystem": "scheduler", 00:03:58.420 "config": [ 00:03:58.420 { 00:03:58.420 "method": "framework_set_scheduler", 00:03:58.420 "params": { 00:03:58.420 "name": "static" 00:03:58.420 } 00:03:58.420 } 00:03:58.420 ] 00:03:58.420 }, 00:03:58.420 { 00:03:58.420 "subsystem": "vhost_scsi", 00:03:58.420 "config": [] 00:03:58.420 }, 00:03:58.420 { 00:03:58.420 "subsystem": "vhost_blk", 00:03:58.420 "config": [] 00:03:58.420 }, 00:03:58.420 { 00:03:58.420 "subsystem": "ublk", 00:03:58.420 "config": [] 00:03:58.420 }, 00:03:58.420 { 00:03:58.420 "subsystem": "nbd", 00:03:58.420 "config": [] 00:03:58.420 }, 00:03:58.420 { 00:03:58.420 "subsystem": "nvmf", 00:03:58.420 "config": [ 00:03:58.420 { 00:03:58.420 "method": "nvmf_set_config", 00:03:58.420 "params": { 00:03:58.420 "discovery_filter": "match_any", 00:03:58.420 "admin_cmd_passthru": { 00:03:58.420 "identify_ctrlr": false 00:03:58.420 } 00:03:58.420 } 00:03:58.420 }, 00:03:58.420 { 00:03:58.420 "method": "nvmf_set_max_subsystems", 00:03:58.420 "params": { 00:03:58.420 "max_subsystems": 1024 00:03:58.420 } 00:03:58.420 }, 00:03:58.420 { 00:03:58.420 "method": "nvmf_set_crdt", 00:03:58.420 "params": { 00:03:58.420 "crdt1": 0, 00:03:58.420 "crdt2": 0, 00:03:58.420 "crdt3": 0 00:03:58.420 } 00:03:58.420 }, 00:03:58.420 { 00:03:58.420 "method": "nvmf_create_transport", 00:03:58.420 "params": { 00:03:58.420 "trtype": "TCP", 00:03:58.420 "max_queue_depth": 128, 00:03:58.420 "max_io_qpairs_per_ctrlr": 127, 00:03:58.420 "in_capsule_data_size": 4096, 00:03:58.420 "max_io_size": 131072, 00:03:58.420 "io_unit_size": 131072, 00:03:58.420 "max_aq_depth": 128, 00:03:58.420 "num_shared_buffers": 511, 00:03:58.420 "buf_cache_size": 4294967295, 00:03:58.420 "dif_insert_or_strip": false, 00:03:58.420 "zcopy": false, 00:03:58.420 "c2h_success": true, 00:03:58.420 "sock_priority": 0, 00:03:58.420 "abort_timeout_sec": 1, 00:03:58.420 "ack_timeout": 0 00:03:58.420 } 00:03:58.420 } 00:03:58.420 ] 00:03:58.420 }, 00:03:58.420 { 00:03:58.420 "subsystem": "iscsi", 00:03:58.420 "config": [ 00:03:58.420 { 00:03:58.420 "method": "iscsi_set_options", 00:03:58.420 "params": { 00:03:58.420 "node_base": "iqn.2016-06.io.spdk", 
00:03:58.420 "max_sessions": 128, 00:03:58.420 "max_connections_per_session": 2, 00:03:58.420 "max_queue_depth": 64, 00:03:58.420 "default_time2wait": 2, 00:03:58.420 "default_time2retain": 20, 00:03:58.420 "first_burst_length": 8192, 00:03:58.420 "immediate_data": true, 00:03:58.420 "allow_duplicated_isid": false, 00:03:58.420 "error_recovery_level": 0, 00:03:58.420 "nop_timeout": 60, 00:03:58.420 "nop_in_interval": 30, 00:03:58.420 "disable_chap": false, 00:03:58.420 "require_chap": false, 00:03:58.420 "mutual_chap": false, 00:03:58.420 "chap_group": 0, 00:03:58.420 "max_large_datain_per_connection": 64, 00:03:58.420 "max_r2t_per_connection": 4, 00:03:58.420 "pdu_pool_size": 36864, 00:03:58.420 "immediate_data_pool_size": 16384, 00:03:58.420 "data_out_pool_size": 2048 00:03:58.420 } 00:03:58.420 } 00:03:58.420 ] 00:03:58.420 } 00:03:58.420 ] 00:03:58.420 } 00:03:58.420 14:08:39 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:58.420 14:08:39 -- rpc/skip_rpc.sh@40 -- # killprocess 3058641 00:03:58.420 14:08:39 -- common/autotest_common.sh@936 -- # '[' -z 3058641 ']' 00:03:58.420 14:08:39 -- common/autotest_common.sh@940 -- # kill -0 3058641 00:03:58.420 14:08:39 -- common/autotest_common.sh@941 -- # uname 00:03:58.420 14:08:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:58.420 14:08:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3058641 00:03:58.420 14:08:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:58.420 14:08:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:58.420 14:08:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3058641' 00:03:58.420 killing process with pid 3058641 00:03:58.420 14:08:39 -- common/autotest_common.sh@955 -- # kill 3058641 00:03:58.420 14:08:39 -- common/autotest_common.sh@960 -- # wait 3058641 00:03:58.678 14:08:40 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3058712 00:03:58.678 14:08:40 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:58.678 14:08:40 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:03.941 14:08:45 -- rpc/skip_rpc.sh@50 -- # killprocess 3058712 00:04:03.941 14:08:45 -- common/autotest_common.sh@936 -- # '[' -z 3058712 ']' 00:04:03.941 14:08:45 -- common/autotest_common.sh@940 -- # kill -0 3058712 00:04:03.941 14:08:45 -- common/autotest_common.sh@941 -- # uname 00:04:03.941 14:08:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:03.941 14:08:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3058712 00:04:03.941 14:08:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:03.941 14:08:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:03.941 14:08:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3058712' 00:04:03.941 killing process with pid 3058712 00:04:03.941 14:08:45 -- common/autotest_common.sh@955 -- # kill 3058712 00:04:03.941 14:08:45 -- common/autotest_common.sh@960 -- # wait 3058712 00:04:03.941 14:08:45 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:04.201 14:08:45 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:04.201 00:04:04.201 real 0m6.360s 00:04:04.201 user 0m6.051s 00:04:04.201 sys 0m0.625s 00:04:04.201 14:08:45 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:04:04.201 14:08:45 -- common/autotest_common.sh@10 -- # set +x 00:04:04.201 ************************************ 00:04:04.201 END TEST skip_rpc_with_json 00:04:04.201 ************************************ 00:04:04.201 14:08:45 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:04.201 14:08:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:04.201 14:08:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:04.201 14:08:45 -- common/autotest_common.sh@10 -- # set +x 00:04:04.201 ************************************ 00:04:04.201 START TEST skip_rpc_with_delay 00:04:04.201 ************************************ 00:04:04.201 14:08:45 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:04:04.201 14:08:45 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:04.201 14:08:45 -- common/autotest_common.sh@638 -- # local es=0 00:04:04.201 14:08:45 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:04.201 14:08:45 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.201 14:08:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:04.201 14:08:45 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.201 14:08:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:04.201 14:08:45 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.201 14:08:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:04.201 14:08:45 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.201 14:08:45 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:04.201 14:08:45 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:04.201 [2024-04-26 14:08:45.723076] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:04.201 [2024-04-26 14:08:45.723231] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:04.201 14:08:45 -- common/autotest_common.sh@641 -- # es=1 00:04:04.201 14:08:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:04.201 14:08:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:04.201 14:08:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:04.201 00:04:04.201 real 0m0.078s 00:04:04.201 user 0m0.045s 00:04:04.201 sys 0m0.032s 00:04:04.201 14:08:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:04.201 14:08:45 -- common/autotest_common.sh@10 -- # set +x 00:04:04.201 ************************************ 00:04:04.201 END TEST skip_rpc_with_delay 00:04:04.201 ************************************ 00:04:04.201 14:08:45 -- rpc/skip_rpc.sh@77 -- # uname 00:04:04.201 14:08:45 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:04.201 14:08:45 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:04.201 14:08:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:04.202 14:08:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:04.202 14:08:45 -- common/autotest_common.sh@10 -- # set +x 00:04:04.480 ************************************ 00:04:04.480 START TEST exit_on_failed_rpc_init 00:04:04.480 ************************************ 00:04:04.480 14:08:45 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:04:04.480 14:08:45 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3059363 00:04:04.480 14:08:45 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:04.480 14:08:45 -- rpc/skip_rpc.sh@63 -- # waitforlisten 3059363 00:04:04.480 14:08:45 -- common/autotest_common.sh@817 -- # '[' -z 3059363 ']' 00:04:04.480 14:08:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:04.480 14:08:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:04.480 14:08:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:04.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:04.480 14:08:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:04.480 14:08:45 -- common/autotest_common.sh@10 -- # set +x 00:04:04.480 [2024-04-26 14:08:45.935251] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
00:04:04.480 [2024-04-26 14:08:45.935366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3059363 ] 00:04:04.480 EAL: No free 2048 kB hugepages reported on node 1 00:04:04.480 [2024-04-26 14:08:45.997715] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.738 [2024-04-26 14:08:46.114497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.996 14:08:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:04.996 14:08:46 -- common/autotest_common.sh@850 -- # return 0 00:04:04.996 14:08:46 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:04.996 14:08:46 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:04.996 14:08:46 -- common/autotest_common.sh@638 -- # local es=0 00:04:04.996 14:08:46 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:04.996 14:08:46 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.996 14:08:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:04.996 14:08:46 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.996 14:08:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:04.996 14:08:46 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.996 14:08:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:04.996 14:08:46 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.996 14:08:46 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:04.996 14:08:46 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:04.996 [2024-04-26 14:08:46.408664] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:04:04.997 [2024-04-26 14:08:46.408753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3059379 ] 00:04:04.997 EAL: No free 2048 kB hugepages reported on node 1 00:04:04.997 [2024-04-26 14:08:46.468225] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.255 [2024-04-26 14:08:46.586895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:05.255 [2024-04-26 14:08:46.587011] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
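This error is the expected outcome of the exit_on_failed_rpc_init step: the first target (pid 3059363) already holds the default RPC socket /var/tmp/spdk.sock, so the second instance cannot bind it. For reference, a minimal sketch of running two targets side by side, reusing only flags that already appear in this log (the second socket path is illustrative, not from the trace):
  # hypothetical second instance on its own RPC socket (-r), queried via -s
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock rpc_get_methods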
00:04:05.255 [2024-04-26 14:08:46.587037] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:05.255 [2024-04-26 14:08:46.587051] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:05.255 14:08:46 -- common/autotest_common.sh@641 -- # es=234 00:04:05.255 14:08:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:05.255 14:08:46 -- common/autotest_common.sh@650 -- # es=106 00:04:05.255 14:08:46 -- common/autotest_common.sh@651 -- # case "$es" in 00:04:05.255 14:08:46 -- common/autotest_common.sh@658 -- # es=1 00:04:05.255 14:08:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:05.255 14:08:46 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:05.255 14:08:46 -- rpc/skip_rpc.sh@70 -- # killprocess 3059363 00:04:05.256 14:08:46 -- common/autotest_common.sh@936 -- # '[' -z 3059363 ']' 00:04:05.256 14:08:46 -- common/autotest_common.sh@940 -- # kill -0 3059363 00:04:05.256 14:08:46 -- common/autotest_common.sh@941 -- # uname 00:04:05.256 14:08:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:05.256 14:08:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3059363 00:04:05.256 14:08:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:05.256 14:08:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:05.256 14:08:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3059363' 00:04:05.256 killing process with pid 3059363 00:04:05.256 14:08:46 -- common/autotest_common.sh@955 -- # kill 3059363 00:04:05.256 14:08:46 -- common/autotest_common.sh@960 -- # wait 3059363 00:04:05.514 00:04:05.514 real 0m1.182s 00:04:05.514 user 0m1.426s 00:04:05.514 sys 0m0.405s 00:04:05.514 14:08:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:05.514 14:08:47 -- common/autotest_common.sh@10 -- # set +x 00:04:05.514 ************************************ 00:04:05.514 END TEST exit_on_failed_rpc_init 00:04:05.514 ************************************ 00:04:05.772 14:08:47 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:05.772 00:04:05.772 real 0m13.593s 00:04:05.772 user 0m12.806s 00:04:05.772 sys 0m1.698s 00:04:05.772 14:08:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:05.772 14:08:47 -- common/autotest_common.sh@10 -- # set +x 00:04:05.772 ************************************ 00:04:05.772 END TEST skip_rpc 00:04:05.772 ************************************ 00:04:05.772 14:08:47 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:05.772 14:08:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:05.772 14:08:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:05.772 14:08:47 -- common/autotest_common.sh@10 -- # set +x 00:04:05.772 ************************************ 00:04:05.772 START TEST rpc_client 00:04:05.772 ************************************ 00:04:05.772 14:08:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:05.772 * Looking for test storage... 
00:04:05.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:05.772 14:08:47 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:05.772 OK 00:04:05.772 14:08:47 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:05.772 00:04:05.772 real 0m0.070s 00:04:05.772 user 0m0.032s 00:04:05.772 sys 0m0.042s 00:04:05.772 14:08:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:05.772 14:08:47 -- common/autotest_common.sh@10 -- # set +x 00:04:05.772 ************************************ 00:04:05.772 END TEST rpc_client 00:04:05.772 ************************************ 00:04:05.772 14:08:47 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:05.772 14:08:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:05.772 14:08:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:05.772 14:08:47 -- common/autotest_common.sh@10 -- # set +x 00:04:06.031 ************************************ 00:04:06.031 START TEST json_config 00:04:06.031 ************************************ 00:04:06.031 14:08:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:06.031 14:08:47 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:06.031 14:08:47 -- nvmf/common.sh@7 -- # uname -s 00:04:06.031 14:08:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:06.031 14:08:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:06.031 14:08:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:06.031 14:08:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:06.031 14:08:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:06.031 14:08:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:06.031 14:08:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:06.031 14:08:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:06.031 14:08:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:06.031 14:08:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:06.031 14:08:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:04:06.031 14:08:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:04:06.031 14:08:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:06.031 14:08:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:06.031 14:08:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:06.031 14:08:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:06.031 14:08:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:06.031 14:08:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:06.031 14:08:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:06.031 14:08:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:06.031 14:08:47 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.031 14:08:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.031 14:08:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.031 14:08:47 -- paths/export.sh@5 -- # export PATH 00:04:06.031 14:08:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.031 14:08:47 -- nvmf/common.sh@47 -- # : 0 00:04:06.031 14:08:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:06.031 14:08:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:06.031 14:08:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:06.031 14:08:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:06.031 14:08:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:06.031 14:08:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:06.031 14:08:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:06.031 14:08:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:06.031 14:08:47 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:06.031 14:08:47 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:06.031 14:08:47 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:06.031 14:08:47 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:06.031 14:08:47 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:06.031 14:08:47 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:06.031 14:08:47 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:06.031 14:08:47 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:06.031 14:08:47 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:06.032 14:08:47 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:06.032 14:08:47 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:04:06.032 14:08:47 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:06.032 14:08:47 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:06.032 14:08:47 -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:06.032 14:08:47 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:06.032 14:08:47 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:06.032 INFO: JSON configuration test init 00:04:06.032 14:08:47 -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:06.032 14:08:47 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:06.032 14:08:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:06.032 14:08:47 -- common/autotest_common.sh@10 -- # set +x 00:04:06.032 14:08:47 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:06.032 14:08:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:06.032 14:08:47 -- common/autotest_common.sh@10 -- # set +x 00:04:06.032 14:08:47 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:06.032 14:08:47 -- json_config/common.sh@9 -- # local app=target 00:04:06.032 14:08:47 -- json_config/common.sh@10 -- # shift 00:04:06.032 14:08:47 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:06.032 14:08:47 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:06.032 14:08:47 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:06.032 14:08:47 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.032 14:08:47 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:06.032 14:08:47 -- json_config/common.sh@22 -- # app_pid["$app"]=3059604 00:04:06.032 14:08:47 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:06.032 14:08:47 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:06.032 Waiting for target to run... 00:04:06.032 14:08:47 -- json_config/common.sh@25 -- # waitforlisten 3059604 /var/tmp/spdk_tgt.sock 00:04:06.032 14:08:47 -- common/autotest_common.sh@817 -- # '[' -z 3059604 ']' 00:04:06.032 14:08:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:06.032 14:08:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:06.032 14:08:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:06.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:06.032 14:08:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:06.032 14:08:47 -- common/autotest_common.sh@10 -- # set +x 00:04:06.032 [2024-04-26 14:08:47.530692] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
00:04:06.032 [2024-04-26 14:08:47.530780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3059604 ] 00:04:06.032 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.290 [2024-04-26 14:08:47.839039] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.549 [2024-04-26 14:08:47.931651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.114 14:08:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:07.114 14:08:48 -- common/autotest_common.sh@850 -- # return 0 00:04:07.114 14:08:48 -- json_config/common.sh@26 -- # echo '' 00:04:07.114 00:04:07.115 14:08:48 -- json_config/json_config.sh@269 -- # create_accel_config 00:04:07.115 14:08:48 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:07.115 14:08:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:07.115 14:08:48 -- common/autotest_common.sh@10 -- # set +x 00:04:07.115 14:08:48 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:07.115 14:08:48 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:07.115 14:08:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:07.115 14:08:48 -- common/autotest_common.sh@10 -- # set +x 00:04:07.115 14:08:48 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:07.115 14:08:48 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:07.115 14:08:48 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:10.403 14:08:51 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:10.403 14:08:51 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:10.403 14:08:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:10.403 14:08:51 -- common/autotest_common.sh@10 -- # set +x 00:04:10.403 14:08:51 -- json_config/json_config.sh@45 -- # local ret=0 00:04:10.403 14:08:51 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:10.403 14:08:51 -- json_config/json_config.sh@46 -- # local enabled_types 00:04:10.403 14:08:51 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:10.403 14:08:51 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:10.403 14:08:51 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:10.662 14:08:52 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:10.662 14:08:52 -- json_config/json_config.sh@48 -- # local get_types 00:04:10.662 14:08:52 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:10.662 14:08:52 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:10.662 14:08:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:10.662 14:08:52 -- common/autotest_common.sh@10 -- # set +x 00:04:10.662 14:08:52 -- json_config/json_config.sh@55 -- # return 0 00:04:10.662 14:08:52 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:10.662 14:08:52 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:10.662 14:08:52 -- 
json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:10.662 14:08:52 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:10.662 14:08:52 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:10.662 14:08:52 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:10.662 14:08:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:10.662 14:08:52 -- common/autotest_common.sh@10 -- # set +x 00:04:10.662 14:08:52 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:10.662 14:08:52 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:10.662 14:08:52 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:10.662 14:08:52 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:10.662 14:08:52 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:10.920 MallocForNvmf0 00:04:10.920 14:08:52 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:10.920 14:08:52 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:11.178 MallocForNvmf1 00:04:11.178 14:08:52 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:11.178 14:08:52 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:11.436 [2024-04-26 14:08:52.952785] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:11.436 14:08:52 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:11.436 14:08:52 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:11.695 14:08:53 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:11.695 14:08:53 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:12.260 14:08:53 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:12.260 14:08:53 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:12.518 14:08:53 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:12.518 14:08:53 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:12.776 [2024-04-26 14:08:54.120427] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:12.776 14:08:54 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:12.776 14:08:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:12.776 
14:08:54 -- common/autotest_common.sh@10 -- # set +x 00:04:12.776 14:08:54 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:12.776 14:08:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:12.776 14:08:54 -- common/autotest_common.sh@10 -- # set +x 00:04:12.776 14:08:54 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:12.776 14:08:54 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:12.776 14:08:54 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:13.036 MallocBdevForConfigChangeCheck 00:04:13.036 14:08:54 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:13.036 14:08:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:13.036 14:08:54 -- common/autotest_common.sh@10 -- # set +x 00:04:13.036 14:08:54 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:13.036 14:08:54 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:13.602 14:08:54 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:13.602 INFO: shutting down applications... 00:04:13.602 14:08:54 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:13.602 14:08:54 -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:13.602 14:08:54 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:13.602 14:08:54 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:15.504 Calling clear_iscsi_subsystem 00:04:15.504 Calling clear_nvmf_subsystem 00:04:15.504 Calling clear_nbd_subsystem 00:04:15.504 Calling clear_ublk_subsystem 00:04:15.504 Calling clear_vhost_blk_subsystem 00:04:15.504 Calling clear_vhost_scsi_subsystem 00:04:15.504 Calling clear_bdev_subsystem 00:04:15.504 14:08:56 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:15.504 14:08:56 -- json_config/json_config.sh@343 -- # count=100 00:04:15.504 14:08:56 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:15.504 14:08:56 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:15.504 14:08:56 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:15.504 14:08:56 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:15.504 14:08:56 -- json_config/json_config.sh@345 -- # break 00:04:15.504 14:08:56 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:15.504 14:08:56 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:15.504 14:08:56 -- json_config/common.sh@31 -- # local app=target 00:04:15.504 14:08:56 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:15.504 14:08:56 -- json_config/common.sh@35 -- # [[ -n 3059604 ]] 00:04:15.504 14:08:56 -- json_config/common.sh@38 -- # kill -SIGINT 3059604 00:04:15.504 14:08:56 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:15.504 14:08:56 -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:15.504 14:08:56 -- json_config/common.sh@41 -- # kill -0 3059604 00:04:15.504 14:08:56 -- json_config/common.sh@45 -- # sleep 0.5 00:04:16.071 14:08:57 -- json_config/common.sh@40 -- # (( i++ )) 00:04:16.071 14:08:57 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:16.071 14:08:57 -- json_config/common.sh@41 -- # kill -0 3059604 00:04:16.071 14:08:57 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:16.071 14:08:57 -- json_config/common.sh@43 -- # break 00:04:16.071 14:08:57 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:16.071 14:08:57 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:16.071 SPDK target shutdown done 00:04:16.071 14:08:57 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:16.071 INFO: relaunching applications... 00:04:16.071 14:08:57 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:16.071 14:08:57 -- json_config/common.sh@9 -- # local app=target 00:04:16.071 14:08:57 -- json_config/common.sh@10 -- # shift 00:04:16.071 14:08:57 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:16.071 14:08:57 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:16.071 14:08:57 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:16.071 14:08:57 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:16.071 14:08:57 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:16.071 14:08:57 -- json_config/common.sh@22 -- # app_pid["$app"]=3060636 00:04:16.071 14:08:57 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:16.071 14:08:57 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:16.071 Waiting for target to run... 00:04:16.071 14:08:57 -- json_config/common.sh@25 -- # waitforlisten 3060636 /var/tmp/spdk_tgt.sock 00:04:16.071 14:08:57 -- common/autotest_common.sh@817 -- # '[' -z 3060636 ']' 00:04:16.071 14:08:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:16.072 14:08:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:16.072 14:08:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:16.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:16.072 14:08:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:16.072 14:08:57 -- common/autotest_common.sh@10 -- # set +x 00:04:16.072 [2024-04-26 14:08:57.549300] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
00:04:16.072 [2024-04-26 14:08:57.549390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3060636 ] 00:04:16.072 EAL: No free 2048 kB hugepages reported on node 1 00:04:16.330 [2024-04-26 14:08:57.844960] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.589 [2024-04-26 14:08:57.938297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.876 [2024-04-26 14:09:00.947643] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:19.876 [2024-04-26 14:09:00.979992] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:19.876 14:09:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:19.876 14:09:01 -- common/autotest_common.sh@850 -- # return 0 00:04:19.876 14:09:01 -- json_config/common.sh@26 -- # echo '' 00:04:19.876 00:04:19.876 14:09:01 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:19.876 14:09:01 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:19.876 INFO: Checking if target configuration is the same... 00:04:19.876 14:09:01 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:19.876 14:09:01 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:19.876 14:09:01 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:19.876 + '[' 2 -ne 2 ']' 00:04:19.876 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:19.876 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:19.876 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:19.876 +++ basename /dev/fd/62 00:04:19.876 ++ mktemp /tmp/62.XXX 00:04:19.876 + tmp_file_1=/tmp/62.Sb8 00:04:19.876 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:19.876 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:19.876 + tmp_file_2=/tmp/spdk_tgt_config.json.VAB 00:04:19.876 + ret=0 00:04:19.876 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:19.876 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:20.134 + diff -u /tmp/62.Sb8 /tmp/spdk_tgt_config.json.VAB 00:04:20.134 + echo 'INFO: JSON config files are the same' 00:04:20.134 INFO: JSON config files are the same 00:04:20.134 + rm /tmp/62.Sb8 /tmp/spdk_tgt_config.json.VAB 00:04:20.134 + exit 0 00:04:20.134 14:09:01 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:20.134 14:09:01 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:20.134 INFO: changing configuration and checking if this can be detected... 
00:04:20.134 14:09:01 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:20.134 14:09:01 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:20.392 14:09:01 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.392 14:09:01 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:20.392 14:09:01 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:20.392 + '[' 2 -ne 2 ']' 00:04:20.392 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:20.392 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:20.392 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:20.392 +++ basename /dev/fd/62 00:04:20.392 ++ mktemp /tmp/62.XXX 00:04:20.392 + tmp_file_1=/tmp/62.ryL 00:04:20.392 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:20.393 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:20.393 + tmp_file_2=/tmp/spdk_tgt_config.json.meQ 00:04:20.393 + ret=0 00:04:20.393 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:20.651 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:20.909 + diff -u /tmp/62.ryL /tmp/spdk_tgt_config.json.meQ 00:04:20.909 + ret=1 00:04:20.909 + echo '=== Start of file: /tmp/62.ryL ===' 00:04:20.909 + cat /tmp/62.ryL 00:04:20.909 + echo '=== End of file: /tmp/62.ryL ===' 00:04:20.909 + echo '' 00:04:20.909 + echo '=== Start of file: /tmp/spdk_tgt_config.json.meQ ===' 00:04:20.909 + cat /tmp/spdk_tgt_config.json.meQ 00:04:20.909 + echo '=== End of file: /tmp/spdk_tgt_config.json.meQ ===' 00:04:20.909 + echo '' 00:04:20.909 + rm /tmp/62.ryL /tmp/spdk_tgt_config.json.meQ 00:04:20.909 + exit 1 00:04:20.909 14:09:02 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:20.909 INFO: configuration change detected. 
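Both json_diff.sh runs above reduce to the same recipe: dump the live configuration with save_config, canonicalize both sides with config_filter.py -method sort, and diff them; identical output exits 0 ("JSON config files are the same"), any difference exits 1 ("configuration change detected"). A minimal sketch using only commands that appear in this log (temp file names illustrative, and assuming config_filter.py reads the config on stdin, as json_diff.sh uses it):
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | $SPDK/test/json_config/config_filter.py -method sort > /tmp/live.json
  $SPDK/test/json_config/config_filter.py -method sort \
      < $SPDK/spdk_tgt_config.json > /tmp/file.json
  diff -u /tmp/file.json /tmp/live.json   # exit 0 = same, non-zero = change detected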
00:04:20.909 14:09:02 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:20.909 14:09:02 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:20.909 14:09:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:20.909 14:09:02 -- common/autotest_common.sh@10 -- # set +x 00:04:20.909 14:09:02 -- json_config/json_config.sh@307 -- # local ret=0 00:04:20.909 14:09:02 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:20.909 14:09:02 -- json_config/json_config.sh@317 -- # [[ -n 3060636 ]] 00:04:20.909 14:09:02 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:20.909 14:09:02 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:20.909 14:09:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:20.909 14:09:02 -- common/autotest_common.sh@10 -- # set +x 00:04:20.909 14:09:02 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:20.909 14:09:02 -- json_config/json_config.sh@193 -- # uname -s 00:04:20.909 14:09:02 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:20.909 14:09:02 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:20.909 14:09:02 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:20.909 14:09:02 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:20.909 14:09:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:20.909 14:09:02 -- common/autotest_common.sh@10 -- # set +x 00:04:20.909 14:09:02 -- json_config/json_config.sh@323 -- # killprocess 3060636 00:04:20.909 14:09:02 -- common/autotest_common.sh@936 -- # '[' -z 3060636 ']' 00:04:20.909 14:09:02 -- common/autotest_common.sh@940 -- # kill -0 3060636 00:04:20.909 14:09:02 -- common/autotest_common.sh@941 -- # uname 00:04:20.909 14:09:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:20.909 14:09:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3060636 00:04:20.909 14:09:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:20.909 14:09:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:20.909 14:09:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3060636' 00:04:20.909 killing process with pid 3060636 00:04:20.909 14:09:02 -- common/autotest_common.sh@955 -- # kill 3060636 00:04:20.909 14:09:02 -- common/autotest_common.sh@960 -- # wait 3060636 00:04:22.809 14:09:03 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:22.809 14:09:03 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:22.809 14:09:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:22.809 14:09:03 -- common/autotest_common.sh@10 -- # set +x 00:04:22.809 14:09:03 -- json_config/json_config.sh@328 -- # return 0 00:04:22.809 14:09:03 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:22.809 INFO: Success 00:04:22.809 00:04:22.809 real 0m16.549s 00:04:22.809 user 0m19.403s 00:04:22.809 sys 0m1.803s 00:04:22.809 14:09:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:22.809 14:09:03 -- common/autotest_common.sh@10 -- # set +x 00:04:22.809 ************************************ 00:04:22.809 END TEST json_config 00:04:22.809 ************************************ 00:04:22.809 14:09:03 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:22.809 14:09:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:22.809 14:09:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:22.809 14:09:03 -- common/autotest_common.sh@10 -- # set +x 00:04:22.809 ************************************ 00:04:22.809 START TEST json_config_extra_key 00:04:22.809 ************************************ 00:04:22.809 14:09:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:22.809 14:09:04 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:22.809 14:09:04 -- nvmf/common.sh@7 -- # uname -s 00:04:22.809 14:09:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:22.809 14:09:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:22.809 14:09:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:22.809 14:09:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:22.809 14:09:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:22.809 14:09:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:22.810 14:09:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:22.810 14:09:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:22.810 14:09:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:22.810 14:09:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:22.810 14:09:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:04:22.810 14:09:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:04:22.810 14:09:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:22.810 14:09:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:22.810 14:09:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:22.810 14:09:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:22.810 14:09:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:22.810 14:09:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:22.810 14:09:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:22.810 14:09:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:22.810 14:09:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.810 14:09:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.810 14:09:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.810 14:09:04 -- paths/export.sh@5 -- # export PATH 00:04:22.810 14:09:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.810 14:09:04 -- nvmf/common.sh@47 -- # : 0 00:04:22.810 14:09:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:22.810 14:09:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:22.810 14:09:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:22.810 14:09:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:22.810 14:09:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:22.810 14:09:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:22.810 14:09:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:22.810 14:09:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:22.810 14:09:04 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:22.810 14:09:04 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:22.810 14:09:04 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:22.810 14:09:04 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:22.810 14:09:04 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:22.810 14:09:04 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:22.810 14:09:04 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:22.810 14:09:04 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:22.810 14:09:04 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:22.810 14:09:04 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:22.810 14:09:04 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:22.810 INFO: launching applications... 
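The launch that follows is the generic json_config_test_start_app helper: start spdk_tgt against a canned JSON config on a private RPC socket, then wait until that socket answers RPCs. A rough sketch of the start-and-wait idea (the polling loop is illustrative, not the literal common.sh implementation; paths and flags are the ones used below):
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json $SPDK/test/json_config/extra_key.json &
  app_pid=$!
  # illustrative poll: wait until the target responds on its socket
  until $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done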
00:04:22.810 14:09:04 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:22.810 14:09:04 -- json_config/common.sh@9 -- # local app=target 00:04:22.810 14:09:04 -- json_config/common.sh@10 -- # shift 00:04:22.810 14:09:04 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:22.810 14:09:04 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:22.810 14:09:04 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:22.810 14:09:04 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.810 14:09:04 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.810 14:09:04 -- json_config/common.sh@22 -- # app_pid["$app"]=3061473 00:04:22.810 14:09:04 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:22.810 14:09:04 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:22.810 Waiting for target to run... 00:04:22.810 14:09:04 -- json_config/common.sh@25 -- # waitforlisten 3061473 /var/tmp/spdk_tgt.sock 00:04:22.810 14:09:04 -- common/autotest_common.sh@817 -- # '[' -z 3061473 ']' 00:04:22.810 14:09:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:22.810 14:09:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:22.810 14:09:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:22.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:22.810 14:09:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:22.810 14:09:04 -- common/autotest_common.sh@10 -- # set +x 00:04:22.810 [2024-04-26 14:09:04.217870] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:04:22.810 [2024-04-26 14:09:04.217990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3061473 ] 00:04:22.810 EAL: No free 2048 kB hugepages reported on node 1 00:04:23.068 [2024-04-26 14:09:04.564154] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.325 [2024-04-26 14:09:04.659742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.890 14:09:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:23.890 14:09:05 -- common/autotest_common.sh@850 -- # return 0 00:04:23.890 14:09:05 -- json_config/common.sh@26 -- # echo '' 00:04:23.890 00:04:23.890 14:09:05 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:23.890 INFO: shutting down applications... 
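The shutdown traced next mirrors json_config_test_shutdown_app: send SIGINT, then give the target up to 30 half-second intervals to exit before declaring it done. Recapped as a standalone loop with the same constants as the trace below ($app_pid standing in for pid 3061473):
  kill -SIGINT "$app_pid"
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$app_pid" 2>/dev/null || break   # process gone? stop waiting
      sleep 0.5
  done
  echo 'SPDK target shutdown done'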
00:04:23.890 14:09:05 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:23.890 14:09:05 -- json_config/common.sh@31 -- # local app=target 00:04:23.890 14:09:05 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:23.890 14:09:05 -- json_config/common.sh@35 -- # [[ -n 3061473 ]] 00:04:23.890 14:09:05 -- json_config/common.sh@38 -- # kill -SIGINT 3061473 00:04:23.890 14:09:05 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:23.890 14:09:05 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:23.890 14:09:05 -- json_config/common.sh@41 -- # kill -0 3061473 00:04:23.890 14:09:05 -- json_config/common.sh@45 -- # sleep 0.5 00:04:24.456 14:09:05 -- json_config/common.sh@40 -- # (( i++ )) 00:04:24.456 14:09:05 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.456 14:09:05 -- json_config/common.sh@41 -- # kill -0 3061473 00:04:24.456 14:09:05 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:24.456 14:09:05 -- json_config/common.sh@43 -- # break 00:04:24.456 14:09:05 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:24.456 14:09:05 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:24.456 SPDK target shutdown done 00:04:24.456 14:09:05 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:24.456 Success 00:04:24.456 00:04:24.456 real 0m1.640s 00:04:24.456 user 0m1.583s 00:04:24.456 sys 0m0.445s 00:04:24.456 14:09:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:24.456 14:09:05 -- common/autotest_common.sh@10 -- # set +x 00:04:24.456 ************************************ 00:04:24.456 END TEST json_config_extra_key 00:04:24.456 ************************************ 00:04:24.456 14:09:05 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:24.456 14:09:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.456 14:09:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.456 14:09:05 -- common/autotest_common.sh@10 -- # set +x 00:04:24.456 ************************************ 00:04:24.456 START TEST alias_rpc 00:04:24.456 ************************************ 00:04:24.456 14:09:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:24.456 * Looking for test storage... 00:04:24.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:24.456 14:09:05 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:24.456 14:09:05 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3061726 00:04:24.456 14:09:05 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.456 14:09:05 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3061726 00:04:24.456 14:09:05 -- common/autotest_common.sh@817 -- # '[' -z 3061726 ']' 00:04:24.456 14:09:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.456 14:09:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:24.456 14:09:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:24.456 14:09:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:24.456 14:09:05 -- common/autotest_common.sh@10 -- # set +x 00:04:24.456 [2024-04-26 14:09:05.994958] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:04:24.456 [2024-04-26 14:09:05.995066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3061726 ] 00:04:24.456 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.715 [2024-04-26 14:09:06.055010] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.715 [2024-04-26 14:09:06.169530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.973 14:09:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:24.973 14:09:06 -- common/autotest_common.sh@850 -- # return 0 00:04:24.973 14:09:06 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:25.231 14:09:06 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3061726 00:04:25.231 14:09:06 -- common/autotest_common.sh@936 -- # '[' -z 3061726 ']' 00:04:25.231 14:09:06 -- common/autotest_common.sh@940 -- # kill -0 3061726 00:04:25.231 14:09:06 -- common/autotest_common.sh@941 -- # uname 00:04:25.231 14:09:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:25.231 14:09:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3061726 00:04:25.231 14:09:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:25.231 14:09:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:25.231 14:09:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3061726' 00:04:25.231 killing process with pid 3061726 00:04:25.231 14:09:06 -- common/autotest_common.sh@955 -- # kill 3061726 00:04:25.231 14:09:06 -- common/autotest_common.sh@960 -- # wait 3061726 00:04:25.798 00:04:25.799 real 0m1.196s 00:04:25.799 user 0m1.377s 00:04:25.799 sys 0m0.388s 00:04:25.799 14:09:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:25.799 14:09:07 -- common/autotest_common.sh@10 -- # set +x 00:04:25.799 ************************************ 00:04:25.799 END TEST alias_rpc 00:04:25.799 ************************************ 00:04:25.799 14:09:07 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:04:25.799 14:09:07 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:25.799 14:09:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:25.799 14:09:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:25.799 14:09:07 -- common/autotest_common.sh@10 -- # set +x 00:04:25.799 ************************************ 00:04:25.799 START TEST spdkcli_tcp 00:04:25.799 ************************************ 00:04:25.799 14:09:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:25.799 * Looking for test storage... 
00:04:25.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:25.799 14:09:07 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:25.799 14:09:07 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:25.799 14:09:07 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:25.799 14:09:07 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:25.799 14:09:07 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:25.799 14:09:07 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:25.799 14:09:07 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:25.799 14:09:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:25.799 14:09:07 -- common/autotest_common.sh@10 -- # set +x 00:04:25.799 14:09:07 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3061887 00:04:25.799 14:09:07 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:25.799 14:09:07 -- spdkcli/tcp.sh@27 -- # waitforlisten 3061887 00:04:25.799 14:09:07 -- common/autotest_common.sh@817 -- # '[' -z 3061887 ']' 00:04:25.799 14:09:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.799 14:09:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:25.799 14:09:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.799 14:09:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:25.799 14:09:07 -- common/autotest_common.sh@10 -- # set +x 00:04:25.799 [2024-04-26 14:09:07.330698] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
00:04:25.799 [2024-04-26 14:09:07.330815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3061887 ] 00:04:25.799 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.057 [2024-04-26 14:09:07.393735] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:26.057 [2024-04-26 14:09:07.512656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.057 [2024-04-26 14:09:07.512673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.315 14:09:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:26.315 14:09:07 -- common/autotest_common.sh@850 -- # return 0 00:04:26.315 14:09:07 -- spdkcli/tcp.sh@31 -- # socat_pid=3061902 00:04:26.315 14:09:07 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:26.315 14:09:07 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:26.575 [ 00:04:26.575 "bdev_malloc_delete", 00:04:26.575 "bdev_malloc_create", 00:04:26.575 "bdev_null_resize", 00:04:26.575 "bdev_null_delete", 00:04:26.575 "bdev_null_create", 00:04:26.575 "bdev_nvme_cuse_unregister", 00:04:26.575 "bdev_nvme_cuse_register", 00:04:26.575 "bdev_opal_new_user", 00:04:26.575 "bdev_opal_set_lock_state", 00:04:26.575 "bdev_opal_delete", 00:04:26.575 "bdev_opal_get_info", 00:04:26.575 "bdev_opal_create", 00:04:26.575 "bdev_nvme_opal_revert", 00:04:26.575 "bdev_nvme_opal_init", 00:04:26.575 "bdev_nvme_send_cmd", 00:04:26.575 "bdev_nvme_get_path_iostat", 00:04:26.575 "bdev_nvme_get_mdns_discovery_info", 00:04:26.575 "bdev_nvme_stop_mdns_discovery", 00:04:26.575 "bdev_nvme_start_mdns_discovery", 00:04:26.575 "bdev_nvme_set_multipath_policy", 00:04:26.575 "bdev_nvme_set_preferred_path", 00:04:26.575 "bdev_nvme_get_io_paths", 00:04:26.575 "bdev_nvme_remove_error_injection", 00:04:26.575 "bdev_nvme_add_error_injection", 00:04:26.575 "bdev_nvme_get_discovery_info", 00:04:26.575 "bdev_nvme_stop_discovery", 00:04:26.575 "bdev_nvme_start_discovery", 00:04:26.575 "bdev_nvme_get_controller_health_info", 00:04:26.575 "bdev_nvme_disable_controller", 00:04:26.575 "bdev_nvme_enable_controller", 00:04:26.575 "bdev_nvme_reset_controller", 00:04:26.575 "bdev_nvme_get_transport_statistics", 00:04:26.575 "bdev_nvme_apply_firmware", 00:04:26.575 "bdev_nvme_detach_controller", 00:04:26.575 "bdev_nvme_get_controllers", 00:04:26.575 "bdev_nvme_attach_controller", 00:04:26.575 "bdev_nvme_set_hotplug", 00:04:26.575 "bdev_nvme_set_options", 00:04:26.575 "bdev_passthru_delete", 00:04:26.575 "bdev_passthru_create", 00:04:26.575 "bdev_lvol_grow_lvstore", 00:04:26.575 "bdev_lvol_get_lvols", 00:04:26.575 "bdev_lvol_get_lvstores", 00:04:26.575 "bdev_lvol_delete", 00:04:26.575 "bdev_lvol_set_read_only", 00:04:26.575 "bdev_lvol_resize", 00:04:26.575 "bdev_lvol_decouple_parent", 00:04:26.575 "bdev_lvol_inflate", 00:04:26.575 "bdev_lvol_rename", 00:04:26.575 "bdev_lvol_clone_bdev", 00:04:26.575 "bdev_lvol_clone", 00:04:26.575 "bdev_lvol_snapshot", 00:04:26.575 "bdev_lvol_create", 00:04:26.575 "bdev_lvol_delete_lvstore", 00:04:26.575 "bdev_lvol_rename_lvstore", 00:04:26.575 "bdev_lvol_create_lvstore", 00:04:26.575 "bdev_raid_set_options", 00:04:26.575 "bdev_raid_remove_base_bdev", 00:04:26.575 "bdev_raid_add_base_bdev", 00:04:26.575 "bdev_raid_delete", 00:04:26.575 "bdev_raid_create", 
00:04:26.575 "bdev_raid_get_bdevs", 00:04:26.575 "bdev_error_inject_error", 00:04:26.575 "bdev_error_delete", 00:04:26.575 "bdev_error_create", 00:04:26.575 "bdev_split_delete", 00:04:26.575 "bdev_split_create", 00:04:26.575 "bdev_delay_delete", 00:04:26.575 "bdev_delay_create", 00:04:26.575 "bdev_delay_update_latency", 00:04:26.575 "bdev_zone_block_delete", 00:04:26.575 "bdev_zone_block_create", 00:04:26.576 "blobfs_create", 00:04:26.576 "blobfs_detect", 00:04:26.576 "blobfs_set_cache_size", 00:04:26.576 "bdev_aio_delete", 00:04:26.576 "bdev_aio_rescan", 00:04:26.576 "bdev_aio_create", 00:04:26.576 "bdev_ftl_set_property", 00:04:26.576 "bdev_ftl_get_properties", 00:04:26.576 "bdev_ftl_get_stats", 00:04:26.576 "bdev_ftl_unmap", 00:04:26.576 "bdev_ftl_unload", 00:04:26.576 "bdev_ftl_delete", 00:04:26.576 "bdev_ftl_load", 00:04:26.576 "bdev_ftl_create", 00:04:26.576 "bdev_virtio_attach_controller", 00:04:26.576 "bdev_virtio_scsi_get_devices", 00:04:26.576 "bdev_virtio_detach_controller", 00:04:26.576 "bdev_virtio_blk_set_hotplug", 00:04:26.576 "bdev_iscsi_delete", 00:04:26.576 "bdev_iscsi_create", 00:04:26.576 "bdev_iscsi_set_options", 00:04:26.576 "accel_error_inject_error", 00:04:26.576 "ioat_scan_accel_module", 00:04:26.576 "dsa_scan_accel_module", 00:04:26.576 "iaa_scan_accel_module", 00:04:26.576 "vfu_virtio_create_scsi_endpoint", 00:04:26.576 "vfu_virtio_scsi_remove_target", 00:04:26.576 "vfu_virtio_scsi_add_target", 00:04:26.576 "vfu_virtio_create_blk_endpoint", 00:04:26.576 "vfu_virtio_delete_endpoint", 00:04:26.576 "keyring_file_remove_key", 00:04:26.576 "keyring_file_add_key", 00:04:26.576 "iscsi_set_options", 00:04:26.576 "iscsi_get_auth_groups", 00:04:26.576 "iscsi_auth_group_remove_secret", 00:04:26.576 "iscsi_auth_group_add_secret", 00:04:26.576 "iscsi_delete_auth_group", 00:04:26.576 "iscsi_create_auth_group", 00:04:26.576 "iscsi_set_discovery_auth", 00:04:26.576 "iscsi_get_options", 00:04:26.576 "iscsi_target_node_request_logout", 00:04:26.576 "iscsi_target_node_set_redirect", 00:04:26.576 "iscsi_target_node_set_auth", 00:04:26.576 "iscsi_target_node_add_lun", 00:04:26.576 "iscsi_get_stats", 00:04:26.576 "iscsi_get_connections", 00:04:26.576 "iscsi_portal_group_set_auth", 00:04:26.576 "iscsi_start_portal_group", 00:04:26.576 "iscsi_delete_portal_group", 00:04:26.576 "iscsi_create_portal_group", 00:04:26.576 "iscsi_get_portal_groups", 00:04:26.576 "iscsi_delete_target_node", 00:04:26.576 "iscsi_target_node_remove_pg_ig_maps", 00:04:26.576 "iscsi_target_node_add_pg_ig_maps", 00:04:26.576 "iscsi_create_target_node", 00:04:26.576 "iscsi_get_target_nodes", 00:04:26.576 "iscsi_delete_initiator_group", 00:04:26.576 "iscsi_initiator_group_remove_initiators", 00:04:26.576 "iscsi_initiator_group_add_initiators", 00:04:26.576 "iscsi_create_initiator_group", 00:04:26.576 "iscsi_get_initiator_groups", 00:04:26.576 "nvmf_set_crdt", 00:04:26.576 "nvmf_set_config", 00:04:26.576 "nvmf_set_max_subsystems", 00:04:26.576 "nvmf_subsystem_get_listeners", 00:04:26.576 "nvmf_subsystem_get_qpairs", 00:04:26.576 "nvmf_subsystem_get_controllers", 00:04:26.576 "nvmf_get_stats", 00:04:26.576 "nvmf_get_transports", 00:04:26.576 "nvmf_create_transport", 00:04:26.576 "nvmf_get_targets", 00:04:26.576 "nvmf_delete_target", 00:04:26.576 "nvmf_create_target", 00:04:26.576 "nvmf_subsystem_allow_any_host", 00:04:26.576 "nvmf_subsystem_remove_host", 00:04:26.576 "nvmf_subsystem_add_host", 00:04:26.576 "nvmf_ns_remove_host", 00:04:26.576 "nvmf_ns_add_host", 00:04:26.576 "nvmf_subsystem_remove_ns", 00:04:26.576 
"nvmf_subsystem_add_ns", 00:04:26.576 "nvmf_subsystem_listener_set_ana_state", 00:04:26.576 "nvmf_discovery_get_referrals", 00:04:26.576 "nvmf_discovery_remove_referral", 00:04:26.576 "nvmf_discovery_add_referral", 00:04:26.576 "nvmf_subsystem_remove_listener", 00:04:26.576 "nvmf_subsystem_add_listener", 00:04:26.576 "nvmf_delete_subsystem", 00:04:26.576 "nvmf_create_subsystem", 00:04:26.576 "nvmf_get_subsystems", 00:04:26.576 "env_dpdk_get_mem_stats", 00:04:26.576 "nbd_get_disks", 00:04:26.576 "nbd_stop_disk", 00:04:26.576 "nbd_start_disk", 00:04:26.576 "ublk_recover_disk", 00:04:26.576 "ublk_get_disks", 00:04:26.576 "ublk_stop_disk", 00:04:26.576 "ublk_start_disk", 00:04:26.576 "ublk_destroy_target", 00:04:26.576 "ublk_create_target", 00:04:26.576 "virtio_blk_create_transport", 00:04:26.576 "virtio_blk_get_transports", 00:04:26.576 "vhost_controller_set_coalescing", 00:04:26.576 "vhost_get_controllers", 00:04:26.576 "vhost_delete_controller", 00:04:26.576 "vhost_create_blk_controller", 00:04:26.576 "vhost_scsi_controller_remove_target", 00:04:26.576 "vhost_scsi_controller_add_target", 00:04:26.576 "vhost_start_scsi_controller", 00:04:26.576 "vhost_create_scsi_controller", 00:04:26.576 "thread_set_cpumask", 00:04:26.576 "framework_get_scheduler", 00:04:26.576 "framework_set_scheduler", 00:04:26.576 "framework_get_reactors", 00:04:26.576 "thread_get_io_channels", 00:04:26.576 "thread_get_pollers", 00:04:26.576 "thread_get_stats", 00:04:26.576 "framework_monitor_context_switch", 00:04:26.576 "spdk_kill_instance", 00:04:26.576 "log_enable_timestamps", 00:04:26.576 "log_get_flags", 00:04:26.576 "log_clear_flag", 00:04:26.576 "log_set_flag", 00:04:26.576 "log_get_level", 00:04:26.576 "log_set_level", 00:04:26.576 "log_get_print_level", 00:04:26.576 "log_set_print_level", 00:04:26.576 "framework_enable_cpumask_locks", 00:04:26.576 "framework_disable_cpumask_locks", 00:04:26.576 "framework_wait_init", 00:04:26.576 "framework_start_init", 00:04:26.576 "scsi_get_devices", 00:04:26.576 "bdev_get_histogram", 00:04:26.576 "bdev_enable_histogram", 00:04:26.576 "bdev_set_qos_limit", 00:04:26.576 "bdev_set_qd_sampling_period", 00:04:26.576 "bdev_get_bdevs", 00:04:26.576 "bdev_reset_iostat", 00:04:26.576 "bdev_get_iostat", 00:04:26.576 "bdev_examine", 00:04:26.576 "bdev_wait_for_examine", 00:04:26.576 "bdev_set_options", 00:04:26.576 "notify_get_notifications", 00:04:26.576 "notify_get_types", 00:04:26.576 "accel_get_stats", 00:04:26.576 "accel_set_options", 00:04:26.576 "accel_set_driver", 00:04:26.576 "accel_crypto_key_destroy", 00:04:26.576 "accel_crypto_keys_get", 00:04:26.576 "accel_crypto_key_create", 00:04:26.576 "accel_assign_opc", 00:04:26.576 "accel_get_module_info", 00:04:26.576 "accel_get_opc_assignments", 00:04:26.576 "vmd_rescan", 00:04:26.576 "vmd_remove_device", 00:04:26.576 "vmd_enable", 00:04:26.576 "sock_set_default_impl", 00:04:26.576 "sock_impl_set_options", 00:04:26.576 "sock_impl_get_options", 00:04:26.576 "iobuf_get_stats", 00:04:26.576 "iobuf_set_options", 00:04:26.576 "keyring_get_keys", 00:04:26.576 "framework_get_pci_devices", 00:04:26.576 "framework_get_config", 00:04:26.576 "framework_get_subsystems", 00:04:26.576 "vfu_tgt_set_base_path", 00:04:26.576 "trace_get_info", 00:04:26.576 "trace_get_tpoint_group_mask", 00:04:26.576 "trace_disable_tpoint_group", 00:04:26.576 "trace_enable_tpoint_group", 00:04:26.576 "trace_clear_tpoint_mask", 00:04:26.576 "trace_set_tpoint_mask", 00:04:26.576 "spdk_get_version", 00:04:26.576 "rpc_get_methods" 00:04:26.576 ] 00:04:26.576 14:09:08 -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:26.576 14:09:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:26.576 14:09:08 -- common/autotest_common.sh@10 -- # set +x 00:04:26.576 14:09:08 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:26.576 14:09:08 -- spdkcli/tcp.sh@38 -- # killprocess 3061887 00:04:26.576 14:09:08 -- common/autotest_common.sh@936 -- # '[' -z 3061887 ']' 00:04:26.576 14:09:08 -- common/autotest_common.sh@940 -- # kill -0 3061887 00:04:26.576 14:09:08 -- common/autotest_common.sh@941 -- # uname 00:04:26.576 14:09:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:26.576 14:09:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3061887 00:04:26.576 14:09:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:26.576 14:09:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:26.576 14:09:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3061887' 00:04:26.576 killing process with pid 3061887 00:04:26.576 14:09:08 -- common/autotest_common.sh@955 -- # kill 3061887 00:04:26.576 14:09:08 -- common/autotest_common.sh@960 -- # wait 3061887 00:04:26.867 00:04:26.867 real 0m1.198s 00:04:26.867 user 0m2.167s 00:04:26.867 sys 0m0.407s 00:04:26.867 14:09:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:26.867 14:09:08 -- common/autotest_common.sh@10 -- # set +x 00:04:26.867 ************************************ 00:04:26.867 END TEST spdkcli_tcp 00:04:26.867 ************************************ 00:04:27.150 14:09:08 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:27.150 14:09:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:27.150 14:09:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:27.150 14:09:08 -- common/autotest_common.sh@10 -- # set +x 00:04:27.150 ************************************ 00:04:27.150 START TEST dpdk_mem_utility 00:04:27.150 ************************************ 00:04:27.150 14:09:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:27.150 * Looking for test storage... 00:04:27.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:27.150 14:09:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:27.150 14:09:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3062261 00:04:27.150 14:09:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.150 14:09:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3062261 00:04:27.150 14:09:08 -- common/autotest_common.sh@817 -- # '[' -z 3062261 ']' 00:04:27.150 14:09:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.150 14:09:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:27.150 14:09:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
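The spdkcli_tcp pass that just completed exercises the JSON-RPC server over TCP by bridging the target's UNIX socket with socat, which is why rpc_get_methods returned the full method list above. The moving parts, condensed from the trace (judging by the flags, -s/-p select the TCP address and port while -r/-t set the retry count and timeout):

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"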
00:04:27.150 14:09:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:27.150 14:09:08 -- common/autotest_common.sh@10 -- # set +x 00:04:27.150 [2024-04-26 14:09:08.649605] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:04:27.150 [2024-04-26 14:09:08.649719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3062261 ] 00:04:27.150 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.409 [2024-04-26 14:09:08.725605] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.409 [2024-04-26 14:09:08.878467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.345 14:09:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:28.345 14:09:09 -- common/autotest_common.sh@850 -- # return 0 00:04:28.345 14:09:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:28.345 14:09:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:28.345 14:09:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:28.345 14:09:09 -- common/autotest_common.sh@10 -- # set +x 00:04:28.345 { 00:04:28.345 "filename": "/tmp/spdk_mem_dump.txt" 00:04:28.345 } 00:04:28.345 14:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:28.345 14:09:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:28.345 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:28.345 1 heaps totaling size 814.000000 MiB 00:04:28.345 size: 814.000000 MiB heap id: 0 00:04:28.345 end heaps---------- 00:04:28.345 8 mempools totaling size 598.116089 MiB 00:04:28.346 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:28.346 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:28.346 size: 84.521057 MiB name: bdev_io_3062261 00:04:28.346 size: 51.011292 MiB name: evtpool_3062261 00:04:28.346 size: 50.003479 MiB name: msgpool_3062261 00:04:28.346 size: 21.763794 MiB name: PDU_Pool 00:04:28.346 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:28.346 size: 0.026123 MiB name: Session_Pool 00:04:28.346 end mempools------- 00:04:28.346 6 memzones totaling size 4.142822 MiB 00:04:28.346 size: 1.000366 MiB name: RG_ring_0_3062261 00:04:28.346 size: 1.000366 MiB name: RG_ring_1_3062261 00:04:28.346 size: 1.000366 MiB name: RG_ring_4_3062261 00:04:28.346 size: 1.000366 MiB name: RG_ring_5_3062261 00:04:28.346 size: 0.125366 MiB name: RG_ring_2_3062261 00:04:28.346 size: 0.015991 MiB name: RG_ring_3_3062261 00:04:28.346 end memzones------- 00:04:28.346 14:09:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:28.346 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:28.346 list of free elements. 
size: 12.519348 MiB 00:04:28.346 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:28.346 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:28.346 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:28.346 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:28.346 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:28.346 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:28.346 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:28.346 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:28.346 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:28.346 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:28.346 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:28.346 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:28.346 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:28.346 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:28.346 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:28.346 list of standard malloc elements. size: 199.218079 MiB 00:04:28.346 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:28.346 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:28.346 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:28.346 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:28.346 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:28.346 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:28.346 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:28.346 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:28.346 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:28.346 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:28.346 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:28.346 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:28.346 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:28.346 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:28.346 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:28.346 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:28.346 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:28.346 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:28.346 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:28.346 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:28.346 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:28.346 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:28.346 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:28.346 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:28.346 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:28.346 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:28.346 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:28.346 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:28.346 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:28.346 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:28.346 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:28.346 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:28.346 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:04:28.346 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:28.346 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:28.346 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:28.346 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:28.346 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:28.346 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:28.346 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:28.346 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:28.346 list of memzone associated elements. size: 602.262573 MiB 00:04:28.346 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:28.346 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:28.346 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:28.346 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:28.346 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:28.346 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3062261_0 00:04:28.346 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:28.346 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3062261_0 00:04:28.346 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:28.346 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3062261_0 00:04:28.346 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:28.346 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:28.346 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:28.346 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:28.346 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:28.346 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3062261 00:04:28.346 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:28.346 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3062261 00:04:28.346 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:28.346 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3062261 00:04:28.346 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:28.346 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:28.346 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:28.346 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:28.346 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:28.346 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:28.346 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:28.346 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:28.346 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:28.346 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3062261 00:04:28.346 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:28.346 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3062261 00:04:28.346 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:28.346 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3062261 00:04:28.346 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:28.346 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3062261 00:04:28.346 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:28.346 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3062261 00:04:28.346 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:28.346 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:28.346 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:28.346 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:28.346 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:28.346 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:28.346 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:28.346 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3062261 00:04:28.346 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:28.346 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:28.346 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:28.346 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:28.346 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:28.346 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3062261 00:04:28.346 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:28.346 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:28.346 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:28.346 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3062261 00:04:28.346 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:28.346 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3062261 00:04:28.346 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:28.346 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:28.346 14:09:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:28.346 14:09:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3062261 00:04:28.346 14:09:09 -- common/autotest_common.sh@936 -- # '[' -z 3062261 ']' 00:04:28.346 14:09:09 -- common/autotest_common.sh@940 -- # kill -0 3062261 00:04:28.346 14:09:09 -- common/autotest_common.sh@941 -- # uname 00:04:28.346 14:09:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:28.346 14:09:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3062261 00:04:28.346 14:09:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:28.346 14:09:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:28.346 14:09:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3062261' 00:04:28.346 killing process with pid 3062261 00:04:28.346 14:09:09 -- common/autotest_common.sh@955 -- # kill 3062261 00:04:28.346 14:09:09 -- common/autotest_common.sh@960 -- # wait 3062261 00:04:28.604 00:04:28.604 real 0m1.611s 00:04:28.604 user 0m1.857s 00:04:28.604 sys 0m0.446s 00:04:28.604 14:09:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:28.604 14:09:10 -- common/autotest_common.sh@10 -- # set +x 00:04:28.604 ************************************ 00:04:28.604 END TEST dpdk_mem_utility 00:04:28.604 ************************************ 00:04:28.604 14:09:10 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:28.604 14:09:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:28.604 14:09:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:28.604 14:09:10 -- common/autotest_common.sh@10 -- # set +x 
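The memory dump above comes from a two-step flow: the env_dpdk_get_mem_stats RPC makes the target write its DPDK allocator state to /tmp/spdk_mem_dump.txt (the filename the RPC returned), and scripts/dpdk_mem_info.py renders it. Condensed from the trace, assuming rpc_cmd forwards to scripts/rpc.py as it does elsewhere in these suites:

  scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
  scripts/dpdk_mem_info.py                 # summary: heaps, mempools, memzones
  scripts/dpdk_mem_info.py -m 0            # judging by the output, the element-level dump for heap id 0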
00:04:28.862 ************************************ 00:04:28.862 START TEST event 00:04:28.862 ************************************ 00:04:28.862 14:09:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:28.862 * Looking for test storage... 00:04:28.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:28.862 14:09:10 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:28.862 14:09:10 -- bdev/nbd_common.sh@6 -- # set -e 00:04:28.862 14:09:10 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:28.862 14:09:10 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:04:28.862 14:09:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:28.862 14:09:10 -- common/autotest_common.sh@10 -- # set +x 00:04:28.862 ************************************ 00:04:28.862 START TEST event_perf 00:04:28.862 ************************************ 00:04:28.862 14:09:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:29.121 Running I/O for 1 seconds...[2024-04-26 14:09:10.436744] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:04:29.121 [2024-04-26 14:09:10.436816] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3062857 ] 00:04:29.121 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.121 [2024-04-26 14:09:10.496372] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:29.121 [2024-04-26 14:09:10.617273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.121 [2024-04-26 14:09:10.617327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:29.121 [2024-04-26 14:09:10.617376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:29.121 [2024-04-26 14:09:10.617379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.494 Running I/O for 1 seconds... 00:04:30.494 lcore 0: 234510 00:04:30.494 lcore 1: 234511 00:04:30.494 lcore 2: 234511 00:04:30.494 lcore 3: 234510 00:04:30.494 done. 
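event_perf was invoked with -m 0xF -t 1: a four-bit core mask, so one event-counting reactor on each of lcores 0-3, run for one second, which matches the four "lcore N:" count lines above. The mask is plain bit arithmetic, one bit per lcore:

  # bit N selects lcore N; 0xF = lcores 0-3
  printf '0x%X\n' $(( (1 << 0) | (1 << 1) | (1 << 2) | (1 << 3) ))   # prints 0xF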
00:04:30.494 00:04:30.494 real 0m1.305s 00:04:30.494 user 0m4.219s 00:04:30.494 sys 0m0.076s 00:04:30.494 14:09:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:30.494 14:09:11 -- common/autotest_common.sh@10 -- # set +x 00:04:30.494 ************************************ 00:04:30.494 END TEST event_perf 00:04:30.494 ************************************ 00:04:30.494 14:09:11 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:30.494 14:09:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:30.494 14:09:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:30.494 14:09:11 -- common/autotest_common.sh@10 -- # set +x 00:04:30.494 ************************************ 00:04:30.494 START TEST event_reactor 00:04:30.494 ************************************ 00:04:30.494 14:09:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:30.494 [2024-04-26 14:09:11.886032] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:04:30.494 [2024-04-26 14:09:11.886101] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3062995 ] 00:04:30.494 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.494 [2024-04-26 14:09:11.945146] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.494 [2024-04-26 14:09:12.061730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.866 test_start 00:04:31.866 oneshot 00:04:31.866 tick 100 00:04:31.866 tick 100 00:04:31.866 tick 250 00:04:31.866 tick 100 00:04:31.866 tick 100 00:04:31.866 tick 100 00:04:31.866 tick 250 00:04:31.866 tick 500 00:04:31.866 tick 100 00:04:31.866 tick 100 00:04:31.866 tick 250 00:04:31.866 tick 100 00:04:31.866 tick 100 00:04:31.866 test_end 00:04:31.866 00:04:31.866 real 0m1.300s 00:04:31.866 user 0m1.225s 00:04:31.866 sys 0m0.069s 00:04:31.866 14:09:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:31.866 14:09:13 -- common/autotest_common.sh@10 -- # set +x 00:04:31.866 ************************************ 00:04:31.866 END TEST event_reactor 00:04:31.866 ************************************ 00:04:31.866 14:09:13 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:31.866 14:09:13 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:31.866 14:09:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:31.866 14:09:13 -- common/autotest_common.sh@10 -- # set +x 00:04:31.866 ************************************ 00:04:31.866 START TEST event_reactor_perf 00:04:31.866 ************************************ 00:04:31.866 14:09:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:31.866 [2024-04-26 14:09:13.324448] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
00:04:31.866 [2024-04-26 14:09:13.324515] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3063123 ] 00:04:31.866 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.866 [2024-04-26 14:09:13.383697] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.124 [2024-04-26 14:09:13.500144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.058 test_start 00:04:33.058 test_end 00:04:33.058 Performance: 325144 events per second 00:04:33.058 00:04:33.058 real 0m1.298s 00:04:33.058 user 0m1.217s 00:04:33.058 sys 0m0.075s 00:04:33.058 14:09:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:33.058 14:09:14 -- common/autotest_common.sh@10 -- # set +x 00:04:33.058 ************************************ 00:04:33.058 END TEST event_reactor_perf 00:04:33.058 ************************************ 00:04:33.317 14:09:14 -- event/event.sh@49 -- # uname -s 00:04:33.317 14:09:14 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:33.317 14:09:14 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:33.317 14:09:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:33.317 14:09:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:33.317 14:09:14 -- common/autotest_common.sh@10 -- # set +x 00:04:33.317 ************************************ 00:04:33.317 START TEST event_scheduler 00:04:33.317 ************************************ 00:04:33.317 14:09:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:33.317 * Looking for test storage... 00:04:33.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:33.317 14:09:14 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:33.317 14:09:14 -- scheduler/scheduler.sh@35 -- # scheduler_pid=3063376 00:04:33.317 14:09:14 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.317 14:09:14 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:33.317 14:09:14 -- scheduler/scheduler.sh@37 -- # waitforlisten 3063376 00:04:33.317 14:09:14 -- common/autotest_common.sh@817 -- # '[' -z 3063376 ']' 00:04:33.317 14:09:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.317 14:09:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:33.317 14:09:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.317 14:09:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:33.317 14:09:14 -- common/autotest_common.sh@10 -- # set +x 00:04:33.317 [2024-04-26 14:09:14.853765] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
00:04:33.317 [2024-04-26 14:09:14.853871] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3063376 ] 00:04:33.317 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.576 [2024-04-26 14:09:14.919909] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:33.576 [2024-04-26 14:09:15.037871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.576 [2024-04-26 14:09:15.037950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.576 [2024-04-26 14:09:15.038010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:33.576 [2024-04-26 14:09:15.038004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:33.576 14:09:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:33.576 14:09:15 -- common/autotest_common.sh@850 -- # return 0 00:04:33.576 14:09:15 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:33.576 14:09:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:33.576 14:09:15 -- common/autotest_common.sh@10 -- # set +x 00:04:33.576 POWER: Env isn't set yet! 00:04:33.576 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:33.576 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:04:33.576 POWER: Cannot get available frequencies of lcore 0 00:04:33.576 POWER: Attempting to initialise PSTAT power management... 00:04:33.576 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:33.576 POWER: Initialized successfully for lcore 0 power management 00:04:33.576 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:33.576 POWER: Initialized successfully for lcore 1 power management 00:04:33.576 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:33.576 POWER: Initialized successfully for lcore 2 power management 00:04:33.576 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:33.576 POWER: Initialized successfully for lcore 3 power management 00:04:33.576 14:09:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:33.576 14:09:15 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:33.576 14:09:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:33.576 14:09:15 -- common/autotest_common.sh@10 -- # set +x 00:04:33.834 [2024-04-26 14:09:15.219827] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
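The ordering here matters: the scheduler app was started with --wait-for-rpc, which holds off subsystem initialization so framework_set_scheduler can switch to the dynamic scheduler before framework_start_init finishes startup (both RPCs appear in the rpc_get_methods list above, and -p 0x2 matches --main-lcore=2 in the EAL parameters). The POWER lines show the dynamic scheduler probing cpufreq drivers, falling back from ACPI to PSTAT, and setting each lcore's governor to 'performance'. A condensed sketch of the handshake, again assuming rpc_cmd forwards to scripts/rpc.py:

  test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  # (waitforlisten on /var/tmp/spdk.sock elided)
  scripts/rpc.py framework_set_scheduler dynamic   # must land before init completes
  scripts/rpc.py framework_start_init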
00:04:33.834 14:09:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:33.834 14:09:15 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:33.834 14:09:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:33.834 14:09:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:33.834 14:09:15 -- common/autotest_common.sh@10 -- # set +x 00:04:33.834 ************************************ 00:04:33.834 START TEST scheduler_create_thread 00:04:33.834 ************************************ 00:04:33.834 14:09:15 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:04:33.834 14:09:15 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:33.834 14:09:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:33.834 14:09:15 -- common/autotest_common.sh@10 -- # set +x 00:04:33.834 2 00:04:33.834 14:09:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:33.834 14:09:15 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:33.834 14:09:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:33.834 14:09:15 -- common/autotest_common.sh@10 -- # set +x 00:04:33.834 3 00:04:33.834 14:09:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:33.835 14:09:15 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:33.835 14:09:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:33.835 14:09:15 -- common/autotest_common.sh@10 -- # set +x 00:04:33.835 4 00:04:33.835 14:09:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:33.835 14:09:15 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:33.835 14:09:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:33.835 14:09:15 -- common/autotest_common.sh@10 -- # set +x 00:04:33.835 5 00:04:33.835 14:09:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:33.835 14:09:15 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:33.835 14:09:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:33.835 14:09:15 -- common/autotest_common.sh@10 -- # set +x 00:04:33.835 6 00:04:33.835 14:09:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:33.835 14:09:15 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:33.835 14:09:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:33.835 14:09:15 -- common/autotest_common.sh@10 -- # set +x 00:04:33.835 7 00:04:33.835 14:09:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:33.835 14:09:15 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:33.835 14:09:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:33.835 14:09:15 -- common/autotest_common.sh@10 -- # set +x 00:04:34.093 8 00:04:34.093 14:09:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:34.093 14:09:15 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:34.093 14:09:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:34.093 14:09:15 -- common/autotest_common.sh@10 -- # set +x 00:04:34.093 9 00:04:34.093 
14:09:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:34.093 14:09:15 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:34.093 14:09:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:34.093 14:09:15 -- common/autotest_common.sh@10 -- # set +x 00:04:34.093 10 00:04:34.093 14:09:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:34.093 14:09:15 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:34.093 14:09:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:34.093 14:09:15 -- common/autotest_common.sh@10 -- # set +x 00:04:34.093 14:09:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:34.093 14:09:15 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:34.093 14:09:15 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:34.093 14:09:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:34.093 14:09:15 -- common/autotest_common.sh@10 -- # set +x 00:04:34.660 14:09:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:34.660 14:09:15 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:34.660 14:09:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:34.660 14:09:15 -- common/autotest_common.sh@10 -- # set +x 00:04:36.032 14:09:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:36.032 14:09:17 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:36.032 14:09:17 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:36.032 14:09:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:36.032 14:09:17 -- common/autotest_common.sh@10 -- # set +x 00:04:36.968 14:09:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:36.968 00:04:36.968 real 0m3.101s 00:04:36.968 user 0m0.013s 00:04:36.968 sys 0m0.005s 00:04:36.968 14:09:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:36.968 14:09:18 -- common/autotest_common.sh@10 -- # set +x 00:04:36.968 ************************************ 00:04:36.968 END TEST scheduler_create_thread 00:04:36.968 ************************************ 00:04:36.968 14:09:18 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:36.968 14:09:18 -- scheduler/scheduler.sh@46 -- # killprocess 3063376 00:04:36.968 14:09:18 -- common/autotest_common.sh@936 -- # '[' -z 3063376 ']' 00:04:36.968 14:09:18 -- common/autotest_common.sh@940 -- # kill -0 3063376 00:04:36.968 14:09:18 -- common/autotest_common.sh@941 -- # uname 00:04:36.968 14:09:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:36.968 14:09:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3063376 00:04:36.968 14:09:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:36.968 14:09:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:36.968 14:09:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3063376' 00:04:36.968 killing process with pid 3063376 00:04:36.968 14:09:18 -- common/autotest_common.sh@955 -- # kill 3063376 00:04:36.968 14:09:18 -- common/autotest_common.sh@960 -- # wait 3063376 00:04:37.534 [2024-04-26 14:09:18.812325] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
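scheduler_create_thread drives the test app's plugin RPCs: each scheduler_thread_create call returns a thread id (the bare 2..10 values above), and ids 11 and 12 are then retargeted and deleted. Condensed from the trace, with the flag meanings inferred from the thread names (-n name, -m cpumask, -a simulated busy percentage):

  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0   # returned id 11
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50             # raise to 50% busy
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100     # returned id 12
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12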
00:04:37.534 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:04:37.534 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:37.534 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:04:37.534 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:37.534 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:04:37.534 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:37.534 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:04:37.534 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:37.534 00:04:37.534 real 0m4.306s 00:04:37.534 user 0m6.987s 00:04:37.534 sys 0m0.361s 00:04:37.534 14:09:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:37.534 14:09:19 -- common/autotest_common.sh@10 -- # set +x 00:04:37.534 ************************************ 00:04:37.534 END TEST event_scheduler 00:04:37.534 ************************************ 00:04:37.534 14:09:19 -- event/event.sh@51 -- # modprobe -n nbd 00:04:37.534 14:09:19 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:37.534 14:09:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:37.534 14:09:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:37.534 14:09:19 -- common/autotest_common.sh@10 -- # set +x 00:04:37.792 ************************************ 00:04:37.792 START TEST app_repeat 00:04:37.792 ************************************ 00:04:37.792 14:09:19 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:04:37.792 14:09:19 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.792 14:09:19 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.792 14:09:19 -- event/event.sh@13 -- # local nbd_list 00:04:37.792 14:09:19 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:37.792 14:09:19 -- event/event.sh@14 -- # local bdev_list 00:04:37.792 14:09:19 -- event/event.sh@15 -- # local repeat_times=4 00:04:37.792 14:09:19 -- event/event.sh@17 -- # modprobe nbd 00:04:37.792 14:09:19 -- event/event.sh@19 -- # repeat_pid=3063835 00:04:37.792 14:09:19 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:37.792 14:09:19 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.792 14:09:19 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3063835' 00:04:37.792 Process app_repeat pid: 3063835 00:04:37.792 14:09:19 -- event/event.sh@23 -- # for i in {0..2} 00:04:37.792 14:09:19 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:37.792 spdk_app_start Round 0 00:04:37.792 14:09:19 -- event/event.sh@25 -- # waitforlisten 3063835 /var/tmp/spdk-nbd.sock 00:04:37.792 14:09:19 -- common/autotest_common.sh@817 -- # '[' -z 3063835 ']' 00:04:37.792 14:09:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:37.792 14:09:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:37.792 14:09:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:37.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:37.792 14:09:19 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:37.792 14:09:19 -- common/autotest_common.sh@10 -- # set +x
00:04:37.792 [2024-04-26 14:09:19.222119] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
00:04:37.792 [2024-04-26 14:09:19.222189] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3063835 ]
00:04:37.792 EAL: No free 2048 kB hugepages reported on node 1
00:04:37.792 [2024-04-26 14:09:19.280858] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:38.051 [2024-04-26 14:09:19.395797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:04:38.051 [2024-04-26 14:09:19.395831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:38.051 14:09:19 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:38.051 14:09:19 -- common/autotest_common.sh@850 -- # return 0
00:04:38.051 14:09:19 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:38.309 Malloc0
00:04:38.309 14:09:19 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:38.567 Malloc1
00:04:38.567 14:09:20 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:38.567 14:09:20 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:38.567 14:09:20 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:38.567 14:09:20 -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:38.567 14:09:20 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:38.567 14:09:20 -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:38.567 14:09:20 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:38.567 14:09:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:38.567 14:09:20 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:38.567 14:09:20 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:38.567 14:09:20 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:38.567 14:09:20 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:38.567 14:09:20 -- bdev/nbd_common.sh@12 -- # local i
00:04:38.567 14:09:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:38.567 14:09:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:38.567 14:09:20 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:39.134 /dev/nbd0
00:04:39.134 14:09:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:39.134 14:09:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:39.134 14:09:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0
00:04:39.134 14:09:20 -- common/autotest_common.sh@855 -- # local i
00:04:39.134 14:09:20 -- common/autotest_common.sh@857 -- # (( i = 1 ))
00:04:39.134 14:09:20 -- common/autotest_common.sh@857 -- # (( i <= 20 ))
00:04:39.134 14:09:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions
00:04:39.134 14:09:20 -- common/autotest_common.sh@859 -- # break
00:04:39.134 14:09:20 -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:04:39.134 14:09:20 -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:04:39.134 14:09:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:39.134 1+0 records in
00:04:39.134 1+0 records out
00:04:39.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000147331 s, 27.8 MB/s
00:04:39.134 14:09:20 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:39.134 14:09:20 -- common/autotest_common.sh@872 -- # size=4096
00:04:39.134 14:09:20 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:39.134 14:09:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']'
00:04:39.134 14:09:20 -- common/autotest_common.sh@875 -- # return 0
00:04:39.134 14:09:20 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:39.134 14:09:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:39.134 14:09:20 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:39.392 /dev/nbd1
00:04:39.392 14:09:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:39.392 14:09:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:39.392 14:09:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1
00:04:39.392 14:09:20 -- common/autotest_common.sh@855 -- # local i
00:04:39.392 14:09:20 -- common/autotest_common.sh@857 -- # (( i = 1 ))
00:04:39.392 14:09:20 -- common/autotest_common.sh@857 -- # (( i <= 20 ))
00:04:39.392 14:09:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions
00:04:39.392 14:09:20 -- common/autotest_common.sh@859 -- # break
00:04:39.392 14:09:20 -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:04:39.392 14:09:20 -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:04:39.392 14:09:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:39.392 1+0 records in
00:04:39.392 1+0 records out
00:04:39.392 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019874 s, 20.6 MB/s
00:04:39.392 14:09:20 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:39.392 14:09:20 -- common/autotest_common.sh@872 -- # size=4096
00:04:39.392 14:09:20 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:39.392 14:09:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']'
00:04:39.392 14:09:20 -- common/autotest_common.sh@875 -- # return 0
00:04:39.392 14:09:20 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:39.392 14:09:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:39.392 14:09:20 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:39.392 14:09:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:39.392 14:09:20 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:39.651 {
00:04:39.651 "nbd_device": "/dev/nbd0",
00:04:39.651 "bdev_name": "Malloc0"
00:04:39.651 },
00:04:39.651 {
00:04:39.651 "nbd_device": "/dev/nbd1",
00:04:39.651 "bdev_name": "Malloc1"
00:04:39.651 }
00:04:39.651 ]'
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@64 -- # echo '[
00:04:39.651 {
00:04:39.651 "nbd_device": "/dev/nbd0",
00:04:39.651 "bdev_name": "Malloc0"
00:04:39.651 },
00:04:39.651 {
00:04:39.651 "nbd_device": "/dev/nbd1",
00:04:39.651 "bdev_name": "Malloc1"
00:04:39.651 }
00:04:39.651 ]'
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:39.651 /dev/nbd1'
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:39.651 /dev/nbd1'
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@65 -- # count=2
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@66 -- # echo 2
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@95 -- # count=2
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:39.651 256+0 records in
00:04:39.651 256+0 records out
00:04:39.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00518514 s, 202 MB/s
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:39.651 256+0 records in
00:04:39.651 256+0 records out
00:04:39.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247739 s, 42.3 MB/s
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:39.651 256+0 records in
00:04:39.651 256+0 records out
00:04:39.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265955 s, 39.4 MB/s
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@51 -- # local i
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:39.651 14:09:21 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:39.909 14:09:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:39.909 14:09:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:39.909 14:09:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:39.909 14:09:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:39.909 14:09:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:39.909 14:09:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:39.909 14:09:21 -- bdev/nbd_common.sh@41 -- # break
00:04:39.909 14:09:21 -- bdev/nbd_common.sh@45 -- # return 0
00:04:39.909 14:09:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:39.909 14:09:21 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:40.475 14:09:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:40.475 14:09:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:40.475 14:09:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:40.475 14:09:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:40.475 14:09:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:40.475 14:09:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:40.475 14:09:21 -- bdev/nbd_common.sh@41 -- # break
00:04:40.475 14:09:21 -- bdev/nbd_common.sh@45 -- # return 0
00:04:40.476 14:09:21 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:40.476 14:09:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:40.476 14:09:21 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:40.734 14:09:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:40.734 14:09:22 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:40.734 14:09:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:40.734 14:09:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:40.734 14:09:22 -- bdev/nbd_common.sh@65 -- # echo ''
00:04:40.734 14:09:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:40.734 14:09:22 -- bdev/nbd_common.sh@65 -- # true
00:04:40.734 14:09:22 -- bdev/nbd_common.sh@65 -- # count=0
00:04:40.734 14:09:22 -- bdev/nbd_common.sh@66 -- # echo 0
00:04:40.734 14:09:22 -- bdev/nbd_common.sh@104 -- # count=0
00:04:40.734 14:09:22 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:40.734 14:09:22 -- bdev/nbd_common.sh@109 -- # return 0
00:04:40.734 14:09:22 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:40.993 14:09:22 -- event/event.sh@35 -- # sleep 3
00:04:41.251 [2024-04-26 14:09:22.613568] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:41.251 [2024-04-26 14:09:22.728779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:04:41.251 [2024-04-26 14:09:22.728805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:41.251 [2024-04-26 14:09:22.776620] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:41.251 [2024-04-26 14:09:22.776692] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:44.532 14:09:25 -- event/event.sh@23 -- # for i in {0..2}
00:04:44.532 14:09:25 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:04:44.532 spdk_app_start Round 1
00:04:44.532 14:09:25 -- event/event.sh@25 -- # waitforlisten 3063835 /var/tmp/spdk-nbd.sock
00:04:44.532 14:09:25 -- common/autotest_common.sh@817 -- # '[' -z 3063835 ']'
00:04:44.532 14:09:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:44.532 14:09:25 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:44.532 14:09:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:44.532 14:09:25 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:44.532 14:09:25 -- common/autotest_common.sh@10 -- # set +x
00:04:44.532 14:09:25 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:44.532 14:09:25 -- common/autotest_common.sh@850 -- # return 0
00:04:44.532 14:09:25 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:44.532 Malloc0
00:04:44.532 14:09:26 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:44.790 Malloc1
00:04:44.790 14:09:26 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:44.790 14:09:26 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:44.790 14:09:26 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:44.790 14:09:26 -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:44.790 14:09:26 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:44.790 14:09:26 -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:44.790 14:09:26 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:44.790 14:09:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:44.790 14:09:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:44.790 14:09:26 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:44.790 14:09:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:44.790 14:09:26 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:44.790 14:09:26 -- bdev/nbd_common.sh@12 -- # local i
00:04:44.790 14:09:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:44.790 14:09:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:44.790 14:09:26 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:45.047 /dev/nbd0
00:04:45.305 14:09:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:45.305 14:09:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:45.305 14:09:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0
00:04:45.305 14:09:26 -- common/autotest_common.sh@855 -- # local i
00:04:45.305 14:09:26 -- common/autotest_common.sh@857 -- # (( i = 1 ))
00:04:45.305 14:09:26 -- common/autotest_common.sh@857 -- # (( i <= 20 ))
00:04:45.305 14:09:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions
00:04:45.305 14:09:26 -- common/autotest_common.sh@859 -- # break
00:04:45.305 14:09:26 -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:04:45.305 14:09:26 -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:04:45.305 14:09:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:45.305 1+0 records in
00:04:45.305 1+0 records out
00:04:45.305 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196201 s, 20.9 MB/s
00:04:45.305 14:09:26 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:45.305 14:09:26 -- common/autotest_common.sh@872 -- # size=4096
00:04:45.305 14:09:26 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:45.305 14:09:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']'
00:04:45.305 14:09:26 -- common/autotest_common.sh@875 -- # return 0
00:04:45.305 14:09:26 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:45.305 14:09:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:45.305 14:09:26 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:45.562 /dev/nbd1
00:04:45.562 14:09:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:45.562 14:09:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:45.562 14:09:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1
00:04:45.562 14:09:26 -- common/autotest_common.sh@855 -- # local i
00:04:45.562 14:09:26 -- common/autotest_common.sh@857 -- # (( i = 1 ))
00:04:45.562 14:09:26 -- common/autotest_common.sh@857 -- # (( i <= 20 ))
00:04:45.562 14:09:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions
00:04:45.562 14:09:26 -- common/autotest_common.sh@859 -- # break
00:04:45.562 14:09:26 -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:04:45.562 14:09:26 -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:04:45.562 14:09:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:45.562 1+0 records in
00:04:45.562 1+0 records out
00:04:45.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179559 s, 22.8 MB/s
00:04:45.562 14:09:26 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:45.562 14:09:26 -- common/autotest_common.sh@872 -- # size=4096
00:04:45.562 14:09:26 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:45.562 14:09:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']'
00:04:45.562 14:09:26 -- common/autotest_common.sh@875 -- # return 0
00:04:45.562 14:09:26 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:45.562 14:09:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:45.562 14:09:26 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:45.562 14:09:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:45.562 14:09:26 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:45.820 {
00:04:45.820 "nbd_device": "/dev/nbd0",
00:04:45.820 "bdev_name": "Malloc0"
00:04:45.820 },
00:04:45.820 {
00:04:45.820 "nbd_device": "/dev/nbd1",
00:04:45.820 "bdev_name": "Malloc1"
00:04:45.820 }
00:04:45.820 ]'
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@64 -- # echo '[
00:04:45.820 {
00:04:45.820 "nbd_device": "/dev/nbd0",
00:04:45.820 "bdev_name": "Malloc0"
00:04:45.820 },
00:04:45.820 {
00:04:45.820 "nbd_device": "/dev/nbd1",
00:04:45.820 "bdev_name": "Malloc1"
00:04:45.820 }
00:04:45.820 ]'
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:45.820 /dev/nbd1'
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:45.820 /dev/nbd1'
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@65 -- # count=2
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@66 -- # echo 2
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@95 -- # count=2
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:45.820 256+0 records in
00:04:45.820 256+0 records out
00:04:45.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00600215 s, 175 MB/s
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:45.820 256+0 records in
00:04:45.820 256+0 records out
00:04:45.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259941 s, 40.3 MB/s
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:45.820 256+0 records in
00:04:45.820 256+0 records out
00:04:45.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271065 s, 38.7 MB/s
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@51 -- # local i
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:45.820 14:09:27 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:46.385 14:09:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:46.385 14:09:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:46.385 14:09:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:46.385 14:09:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:46.385 14:09:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:46.385 14:09:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:46.385 14:09:27 -- bdev/nbd_common.sh@41 -- # break
00:04:46.385 14:09:27 -- bdev/nbd_common.sh@45 -- # return 0
00:04:46.385 14:09:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:46.385 14:09:27 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:46.642 14:09:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:46.642 14:09:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:46.642 14:09:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:46.642 14:09:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:46.642 14:09:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:46.642 14:09:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:46.642 14:09:27 -- bdev/nbd_common.sh@41 -- # break
00:04:46.642 14:09:27 -- bdev/nbd_common.sh@45 -- # return 0
00:04:46.642 14:09:27 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:46.642 14:09:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:46.642 14:09:27 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:46.900 14:09:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:46.900 14:09:28 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:46.900 14:09:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:46.900 14:09:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:46.900 14:09:28 -- bdev/nbd_common.sh@65 -- # echo ''
00:04:46.900 14:09:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:46.900 14:09:28 -- bdev/nbd_common.sh@65 -- # true
00:04:46.900 14:09:28 -- bdev/nbd_common.sh@65 -- # count=0
00:04:46.900 14:09:28 -- bdev/nbd_common.sh@66 -- # echo 0
00:04:46.900 14:09:28 -- bdev/nbd_common.sh@104 -- # count=0
00:04:46.900 14:09:28 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:46.900 14:09:28 -- bdev/nbd_common.sh@109 -- # return 0
00:04:46.900 14:09:28 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:47.158 14:09:28 -- event/event.sh@35 -- # sleep 3
00:04:47.417 [2024-04-26 14:09:28.808995] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:47.417 [2024-04-26 14:09:28.925168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:04:47.417 [2024-04-26 14:09:28.925173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:47.417 [2024-04-26 14:09:28.972552] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:47.417 [2024-04-26 14:09:28.972617] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:50.699 14:09:31 -- event/event.sh@23 -- # for i in {0..2}
00:04:50.699 14:09:31 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:04:50.699 spdk_app_start Round 2
00:04:50.699 14:09:31 -- event/event.sh@25 -- # waitforlisten 3063835 /var/tmp/spdk-nbd.sock
00:04:50.699 14:09:31 -- common/autotest_common.sh@817 -- # '[' -z 3063835 ']'
00:04:50.699 14:09:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:50.699 14:09:31 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:50.699 14:09:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
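The round-to-round handshake above is the autotest waitforlisten helper: it re-probes the RPC socket until the relaunched app answers, then the test proceeds. As a rough sketch of that retry loop, assuming rpc.py from the SPDK tree and rpc_get_methods as the liveness probe (the function body below is illustrative, not the verbatim autotest_common.sh source):

# poll until the app owns the RPC socket, or give up after max_retries
waitforlisten_sketch() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
  for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
    scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
    sleep 0.1                                # socket not answering yet; retry
  done
  return 1
}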
00:04:50.699 14:09:31 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:50.699 14:09:31 -- common/autotest_common.sh@10 -- # set +x
00:04:50.699 14:09:31 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:50.699 14:09:31 -- common/autotest_common.sh@850 -- # return 0
00:04:50.699 14:09:31 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:50.699 Malloc0
00:04:50.699 14:09:32 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:50.957 Malloc1
00:04:50.957 14:09:32 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:50.957 14:09:32 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:50.957 14:09:32 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:50.957 14:09:32 -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:50.957 14:09:32 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:50.957 14:09:32 -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:50.957 14:09:32 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:50.957 14:09:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:50.957 14:09:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:50.957 14:09:32 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:50.957 14:09:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:50.957 14:09:32 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:50.957 14:09:32 -- bdev/nbd_common.sh@12 -- # local i
00:04:50.957 14:09:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:50.957 14:09:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:50.957 14:09:32 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:51.524 /dev/nbd0
00:04:51.524 14:09:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:51.524 14:09:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:51.524 14:09:32 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0
00:04:51.524 14:09:32 -- common/autotest_common.sh@855 -- # local i
00:04:51.524 14:09:32 -- common/autotest_common.sh@857 -- # (( i = 1 ))
00:04:51.524 14:09:32 -- common/autotest_common.sh@857 -- # (( i <= 20 ))
00:04:51.524 14:09:32 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions
00:04:51.524 14:09:32 -- common/autotest_common.sh@859 -- # break
00:04:51.524 14:09:32 -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:04:51.524 14:09:32 -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:04:51.524 14:09:32 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:51.524 1+0 records in
00:04:51.524 1+0 records out
00:04:51.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000172993 s, 23.7 MB/s
00:04:51.524 14:09:32 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:51.524 14:09:32 -- common/autotest_common.sh@872 -- # size=4096
00:04:51.524 14:09:32 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:51.524 14:09:32 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']'
00:04:51.524 14:09:32 -- common/autotest_common.sh@875 -- # return 0
00:04:51.524 14:09:32 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:51.524 14:09:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:51.524 14:09:32 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:51.801 /dev/nbd1
00:04:51.801 14:09:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:51.801 14:09:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:51.801 14:09:33 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1
00:04:51.801 14:09:33 -- common/autotest_common.sh@855 -- # local i
00:04:51.801 14:09:33 -- common/autotest_common.sh@857 -- # (( i = 1 ))
00:04:51.801 14:09:33 -- common/autotest_common.sh@857 -- # (( i <= 20 ))
00:04:51.801 14:09:33 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions
00:04:51.801 14:09:33 -- common/autotest_common.sh@859 -- # break
00:04:51.801 14:09:33 -- common/autotest_common.sh@870 -- # (( i = 1 ))
00:04:51.801 14:09:33 -- common/autotest_common.sh@870 -- # (( i <= 20 ))
00:04:51.801 14:09:33 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:51.801 1+0 records in
00:04:51.801 1+0 records out
00:04:51.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187654 s, 21.8 MB/s
00:04:51.801 14:09:33 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:51.801 14:09:33 -- common/autotest_common.sh@872 -- # size=4096
00:04:51.801 14:09:33 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:51.801 14:09:33 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']'
00:04:51.801 14:09:33 -- common/autotest_common.sh@875 -- # return 0
00:04:51.801 14:09:33 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:51.801 14:09:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:51.801 14:09:33 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:51.801 14:09:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:51.801 14:09:33 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:52.065 {
00:04:52.065 "nbd_device": "/dev/nbd0",
00:04:52.065 "bdev_name": "Malloc0"
00:04:52.065 },
00:04:52.065 {
00:04:52.065 "nbd_device": "/dev/nbd1",
00:04:52.065 "bdev_name": "Malloc1"
00:04:52.065 }
00:04:52.065 ]'
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@64 -- # echo '[
00:04:52.065 {
00:04:52.065 "nbd_device": "/dev/nbd0",
00:04:52.065 "bdev_name": "Malloc0"
00:04:52.065 },
00:04:52.065 {
00:04:52.065 "nbd_device": "/dev/nbd1",
00:04:52.065 "bdev_name": "Malloc1"
00:04:52.065 }
00:04:52.065 ]'
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:52.065 /dev/nbd1'
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:52.065 /dev/nbd1'
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@65 -- # count=2
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@66 -- # echo 2
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@95 -- # count=2
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:52.065 256+0 records in
00:04:52.065 256+0 records out
00:04:52.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00590593 s, 178 MB/s
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:52.065 256+0 records in
00:04:52.065 256+0 records out
00:04:52.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258938 s, 40.5 MB/s
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:52.065 256+0 records in
00:04:52.065 256+0 records out
00:04:52.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269123 s, 39.0 MB/s
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@51 -- # local i
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:52.065 14:09:33 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:52.323 14:09:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:52.323 14:09:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:52.323 14:09:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:52.323 14:09:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:52.323 14:09:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:52.323 14:09:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:52.323 14:09:33 -- bdev/nbd_common.sh@41 -- # break
00:04:52.323 14:09:33 -- bdev/nbd_common.sh@45 -- # return 0
00:04:52.323 14:09:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:52.323 14:09:33 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:52.889 14:09:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:52.889 14:09:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:52.889 14:09:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:52.889 14:09:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:52.889 14:09:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:52.889 14:09:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:52.889 14:09:34 -- bdev/nbd_common.sh@41 -- # break
00:04:52.889 14:09:34 -- bdev/nbd_common.sh@45 -- # return 0
00:04:52.889 14:09:34 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:52.889 14:09:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:52.889 14:09:34 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:53.147 14:09:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:53.147 14:09:34 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:53.147 14:09:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:53.147 14:09:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:53.147 14:09:34 -- bdev/nbd_common.sh@65 -- # echo ''
00:04:53.147 14:09:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:53.147 14:09:34 -- bdev/nbd_common.sh@65 -- # true
00:04:53.147 14:09:34 -- bdev/nbd_common.sh@65 -- # count=0
00:04:53.147 14:09:34 -- bdev/nbd_common.sh@66 -- # echo 0
00:04:53.147 14:09:34 -- bdev/nbd_common.sh@104 -- # count=0
00:04:53.147 14:09:34 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:53.147 14:09:34 -- bdev/nbd_common.sh@109 -- # return 0
00:04:53.147 14:09:34 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:53.405 14:09:34 -- event/event.sh@35 -- # sleep 3
00:04:53.663 [2024-04-26 14:09:35.026109] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:53.663 [2024-04-26 14:09:35.141966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:04:53.663 [2024-04-26 14:09:35.141989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:53.663 [2024-04-26 14:09:35.193086] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:53.663 [2024-04-26 14:09:35.193154] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:56.945 14:09:37 -- event/event.sh@38 -- # waitforlisten 3063835 /var/tmp/spdk-nbd.sock
00:04:56.945 14:09:37 -- common/autotest_common.sh@817 -- # '[' -z 3063835 ']'
00:04:56.945 14:09:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:56.945 14:09:37 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:56.945 14:09:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:56.945 14:09:37 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:56.945 14:09:37 -- common/autotest_common.sh@10 -- # set +x
00:04:56.945 14:09:38 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:56.945 14:09:38 -- common/autotest_common.sh@850 -- # return 0
00:04:56.945 14:09:38 -- event/event.sh@39 -- # killprocess 3063835
00:04:56.945 14:09:38 -- common/autotest_common.sh@936 -- # '[' -z 3063835 ']'
00:04:56.945 14:09:38 -- common/autotest_common.sh@940 -- # kill -0 3063835
00:04:56.945 14:09:38 -- common/autotest_common.sh@941 -- # uname
00:04:56.945 14:09:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:04:56.945 14:09:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3063835
00:04:56.945 14:09:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:04:56.945 14:09:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:04:56.945 14:09:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3063835'
killing process with pid 3063835
00:04:56.945 14:09:38 -- common/autotest_common.sh@955 -- # kill 3063835
00:04:56.945 14:09:38 -- common/autotest_common.sh@960 -- # wait 3063835
00:04:56.945 spdk_app_start is called in Round 0.
00:04:56.945 Shutdown signal received, stop current app iteration
00:04:56.945 Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 reinitialization...
00:04:56.945 spdk_app_start is called in Round 1.
00:04:56.945 Shutdown signal received, stop current app iteration
00:04:56.945 Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 reinitialization...
00:04:56.945 spdk_app_start is called in Round 2.
00:04:56.945 Shutdown signal received, stop current app iteration
00:04:56.945 Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 reinitialization...
00:04:56.945 spdk_app_start is called in Round 3.
00:04:56.945 Shutdown signal received, stop current app iteration
00:04:56.945 14:09:38 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:04:56.945 14:09:38 -- event/event.sh@42 -- # return 0
00:04:56.945
00:04:56.945 real 0m19.157s
00:04:56.945 user 0m42.877s
00:04:56.945 sys 0m3.498s
00:04:56.945 14:09:38 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:56.945 14:09:38 -- common/autotest_common.sh@10 -- # set +x
00:04:56.945 ************************************
00:04:56.945 END TEST app_repeat
00:04:56.945 ************************************
00:04:56.945 14:09:38 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:04:56.945 14:09:38 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:04:56.945 14:09:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:56.945 14:09:38 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:56.945 14:09:38 -- common/autotest_common.sh@10 -- # set +x
00:04:56.945 ************************************
00:04:56.945 START TEST cpu_locks
00:04:56.945 ************************************
00:04:57.204 14:09:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:04:57.204 * Looking for test storage...
00:04:57.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:04:57.204 14:09:38 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:04:57.204 14:09:38 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:04:57.204 14:09:38 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:04:57.204 14:09:38 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:04:57.204 14:09:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:57.204 14:09:38 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:57.204 14:09:38 -- common/autotest_common.sh@10 -- # set +x
00:04:57.204 ************************************
00:04:57.204 START TEST default_locks
00:04:57.204 ************************************
00:04:57.204 14:09:38 -- common/autotest_common.sh@1111 -- # default_locks
00:04:57.204 14:09:38 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3065823
00:04:57.204 14:09:38 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:57.204 14:09:38 -- event/cpu_locks.sh@47 -- # waitforlisten 3065823
00:04:57.204 14:09:38 -- common/autotest_common.sh@817 -- # '[' -z 3065823 ']'
00:04:57.204 14:09:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:57.204 14:09:38 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:57.204 14:09:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:57.204 14:09:38 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:57.204 14:09:38 -- common/autotest_common.sh@10 -- # set +x
00:04:57.204 [2024-04-26 14:09:38.703197] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
00:04:57.204 [2024-04-26 14:09:38.703287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3065823 ]
00:04:57.204 EAL: No free 2048 kB hugepages reported on node 1
00:04:57.204 [2024-04-26 14:09:38.763743] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:57.463 [2024-04-26 14:09:38.880220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:57.721 14:09:39 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:57.721 14:09:39 -- common/autotest_common.sh@850 -- # return 0
00:04:57.721 14:09:39 -- event/cpu_locks.sh@49 -- # locks_exist 3065823
00:04:57.721 14:09:39 -- event/cpu_locks.sh@22 -- # lslocks -p 3065823
00:04:57.721 14:09:39 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:57.981 lslocks: write error
00:04:57.981 14:09:39 -- event/cpu_locks.sh@50 -- # killprocess 3065823
00:04:57.981 14:09:39 -- common/autotest_common.sh@936 -- # '[' -z 3065823 ']'
00:04:57.981 14:09:39 -- common/autotest_common.sh@940 -- # kill -0 3065823
00:04:57.981 14:09:39 -- common/autotest_common.sh@941 -- # uname
00:04:57.981 14:09:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:04:57.981 14:09:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3065823
00:04:57.981 14:09:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:04:57.981 14:09:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:04:57.981 14:09:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3065823'
killing process with pid 3065823
00:04:57.981 14:09:39 -- common/autotest_common.sh@955 -- # kill 3065823
00:04:57.981 14:09:39 -- common/autotest_common.sh@960 -- # wait 3065823
00:04:58.243 14:09:39 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3065823
00:04:58.243 14:09:39 -- common/autotest_common.sh@638 -- # local es=0
00:04:58.243 14:09:39 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3065823
00:04:58.243 14:09:39 -- common/autotest_common.sh@626 -- # local arg=waitforlisten
00:04:58.243 14:09:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:04:58.243 14:09:39 -- common/autotest_common.sh@630 -- # type -t waitforlisten
00:04:58.243 14:09:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:04:58.243 14:09:39 -- common/autotest_common.sh@641 -- # waitforlisten 3065823
00:04:58.243 14:09:39 -- common/autotest_common.sh@817 -- # '[' -z 3065823 ']'
00:04:58.243 14:09:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:58.243 14:09:39 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:58.243 14:09:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:58.243 14:09:39 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:58.243 14:09:39 -- common/autotest_common.sh@10 -- # set +x
00:04:58.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3065823) - No such process
00:04:58.243 ERROR: process (pid: 3065823) is no longer running
00:04:58.243 14:09:39 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:58.243 14:09:39 -- common/autotest_common.sh@850 -- # return 1
00:04:58.243 14:09:39 -- common/autotest_common.sh@641 -- # es=1
00:04:58.243 14:09:39 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:04:58.243 14:09:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:04:58.243 14:09:39 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:04:58.243 14:09:39 -- event/cpu_locks.sh@54 -- # no_locks
00:04:58.243 14:09:39 -- event/cpu_locks.sh@26 -- # lock_files=()
00:04:58.243 14:09:39 -- event/cpu_locks.sh@26 -- # local lock_files
00:04:58.243 14:09:39 -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:04:58.243
00:04:58.243 real 0m1.135s
00:04:58.243 user 0m1.157s
00:04:58.243 sys 0m0.503s
00:04:58.243 14:09:39 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:58.243 14:09:39 -- common/autotest_common.sh@10 -- # set +x
00:04:58.243 ************************************
00:04:58.243 END TEST default_locks
00:04:58.243 ************************************
00:04:58.243 14:09:39 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:04:58.243 14:09:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:58.243 14:09:39 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:58.243 14:09:39 -- common/autotest_common.sh@10 -- # set +x
00:04:58.500 ************************************
00:04:58.500 START TEST default_locks_via_rpc
00:04:58.500 ************************************
00:04:58.500 14:09:39 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc
00:04:58.500 14:09:39 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3066016
00:04:58.500 14:09:39 -- event/cpu_locks.sh@63 -- # waitforlisten 3066016
00:04:58.500 14:09:39 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:58.501 14:09:39 -- common/autotest_common.sh@817 -- # '[' -z 3066016 ']'
00:04:58.501 14:09:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:58.501 14:09:39 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:58.501 14:09:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:58.501 14:09:39 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:58.501 14:09:39 -- common/autotest_common.sh@10 -- # set +x
00:04:58.501 [2024-04-26 14:09:39.976766] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
00:04:58.501 [2024-04-26 14:09:39.976856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3066016 ]
00:04:58.501 EAL: No free 2048 kB hugepages reported on node 1
00:04:58.501 [2024-04-26 14:09:40.037647] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:58.759 [2024-04-26 14:09:40.156241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:59.017 14:09:40 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:04:59.017 14:09:40 -- common/autotest_common.sh@850 -- # return 0
00:04:59.017 14:09:40 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:04:59.017 14:09:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:59.017 14:09:40 -- common/autotest_common.sh@10 -- # set +x
00:04:59.017 14:09:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:59.017 14:09:40 -- event/cpu_locks.sh@67 -- # no_locks
00:04:59.017 14:09:40 -- event/cpu_locks.sh@26 -- # lock_files=()
00:04:59.017 14:09:40 -- event/cpu_locks.sh@26 -- # local lock_files
00:04:59.017 14:09:40 -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:04:59.017 14:09:40 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:04:59.017 14:09:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:04:59.017 14:09:40 -- common/autotest_common.sh@10 -- # set +x
00:04:59.017 14:09:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:04:59.017 14:09:40 -- event/cpu_locks.sh@71 -- # locks_exist 3066016
00:04:59.017 14:09:40 -- event/cpu_locks.sh@22 -- # lslocks -p 3066016
00:04:59.017 14:09:40 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:04:59.276 14:09:40 -- event/cpu_locks.sh@73 -- # killprocess 3066016
00:04:59.276 14:09:40 -- common/autotest_common.sh@936 -- # '[' -z 3066016 ']'
00:04:59.276 14:09:40 -- common/autotest_common.sh@940 -- # kill -0 3066016
00:04:59.276 14:09:40 -- common/autotest_common.sh@941 -- # uname
00:04:59.276 14:09:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:04:59.276 14:09:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3066016
00:04:59.276 14:09:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:04:59.276 14:09:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:04:59.276 14:09:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3066016'
killing process with pid 3066016
00:04:59.276 14:09:40 -- common/autotest_common.sh@955 -- # kill 3066016
00:04:59.276 14:09:40 -- common/autotest_common.sh@960 -- # wait 3066016
00:04:59.842
00:04:59.842 real 0m1.192s
00:04:59.842 user 0m1.210s
00:04:59.842 sys 0m0.517s
00:04:59.842 14:09:41 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:59.842 14:09:41 -- common/autotest_common.sh@10 -- # set +x
00:04:59.842 ************************************
00:04:59.842 END TEST default_locks_via_rpc
00:04:59.842 ************************************
00:04:59.842 14:09:41 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:04:59.842 14:09:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:59.842 14:09:41 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:59.842 14:09:41 -- common/autotest_common.sh@10 -- # set +x
00:04:59.842 ************************************
00:04:59.842 START TEST non_locking_app_on_locked_coremask
00:04:59.842 ************************************
00:04:59.842 14:09:41 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask
00:04:59.842 14:09:41 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3066152
00:04:59.842 14:09:41 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:59.842 14:09:41 -- event/cpu_locks.sh@81 -- # waitforlisten 3066152 /var/tmp/spdk.sock
00:04:59.842 14:09:41 -- common/autotest_common.sh@817 -- # '[' -z 3066152 ']'
00:04:59.842 14:09:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:59.842 14:09:41 -- common/autotest_common.sh@822 -- # local max_retries=100
00:04:59.842 14:09:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:59.842 14:09:41 -- common/autotest_common.sh@826 -- # xtrace_disable
00:04:59.842 14:09:41 -- common/autotest_common.sh@10 -- # set +x
00:04:59.842 [2024-04-26 14:09:41.301730] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
00:04:59.842 [2024-04-26 14:09:41.301820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3066152 ]
00:04:59.842 EAL: No free 2048 kB hugepages reported on node 1
00:04:59.842 [2024-04-26 14:09:41.360001] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:00.100 [2024-04-26 14:09:41.474504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:00.359 14:09:41 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:05:00.359 14:09:41 -- common/autotest_common.sh@850 -- # return 0
00:05:00.359 14:09:41 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3066162
00:05:00.359 14:09:41 -- event/cpu_locks.sh@85 -- # waitforlisten 3066162 /var/tmp/spdk2.sock
00:05:00.359 14:09:41 -- common/autotest_common.sh@817 -- # '[' -z 3066162 ']'
00:05:00.359 14:09:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:00.359 14:09:41 -- common/autotest_common.sh@822 -- # local max_retries=100
00:05:00.359 14:09:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:00.359 14:09:41 -- common/autotest_common.sh@826 -- # xtrace_disable
00:05:00.359 14:09:41 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:00.359 14:09:41 -- common/autotest_common.sh@10 -- # set +x
00:05:00.359 [2024-04-26 14:09:41.771133] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
00:05:00.359 [2024-04-26 14:09:41.771238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3066162 ]
00:05:00.359 EAL: No free 2048 kB hugepages reported on node 1
00:05:00.359 [2024-04-26 14:09:41.861610] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:00.359 [2024-04-26 14:09:41.861663] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:00.618 [2024-04-26 14:09:42.095734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:01.555 14:09:42 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:05:01.555 14:09:42 -- common/autotest_common.sh@850 -- # return 0
00:05:01.555 14:09:42 -- event/cpu_locks.sh@87 -- # locks_exist 3066152
00:05:01.555 14:09:42 -- event/cpu_locks.sh@22 -- # lslocks -p 3066152
00:05:01.555 14:09:42 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:02.168 lslocks: write error
00:05:02.168 14:09:43 -- event/cpu_locks.sh@89 -- # killprocess 3066152
00:05:02.168 14:09:43 -- common/autotest_common.sh@936 -- # '[' -z 3066152 ']'
00:05:02.168 14:09:43 -- common/autotest_common.sh@940 -- # kill -0 3066152
00:05:02.168 14:09:43 -- common/autotest_common.sh@941 -- # uname
00:05:02.168 14:09:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:02.168 14:09:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3066152
00:05:02.169 14:09:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:02.169 14:09:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:02.169 14:09:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3066152'
killing process with pid 3066152
00:05:02.169 14:09:43 -- common/autotest_common.sh@955 -- # kill 3066152
00:05:02.169 14:09:43 -- common/autotest_common.sh@960 -- # wait 3066152
00:05:02.759 14:09:44 -- event/cpu_locks.sh@90 -- # killprocess 3066162
00:05:02.759 14:09:44 -- common/autotest_common.sh@936 -- # '[' -z 3066162 ']'
00:05:02.759 14:09:44 -- common/autotest_common.sh@940 -- # kill -0 3066162
00:05:02.759 14:09:44 -- common/autotest_common.sh@941 -- # uname
00:05:02.759 14:09:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:02.759 14:09:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3066162
00:05:02.759 14:09:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:02.759 14:09:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:02.759 14:09:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3066162'
killing process with pid 3066162
00:05:02.759 14:09:44 -- common/autotest_common.sh@955 -- # kill 3066162
00:05:02.759 14:09:44 -- common/autotest_common.sh@960 -- # wait 3066162
00:05:03.327
00:05:03.327 real 0m3.357s
00:05:03.327 user 0m3.715s
00:05:03.327 sys 0m1.027s
00:05:03.327 14:09:44 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:03.327 14:09:44 -- common/autotest_common.sh@10 -- # set +x
00:05:03.327 ************************************
00:05:03.327 END TEST non_locking_app_on_locked_coremask
00:05:03.327 ************************************
00:05:03.327 14:09:44 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:03.327 14:09:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:03.327 14:09:44 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:03.327 14:09:44 -- common/autotest_common.sh@10 -- # set +x
00:05:03.327 ************************************
00:05:03.327 START TEST locking_app_on_unlocked_coremask
00:05:03.327 ************************************
00:05:03.327 14:09:44 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask
00:05:03.327 14:09:44 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3066502
00:05:03.327 14:09:44 -- event/cpu_locks.sh@99 -- # waitforlisten 3066502 /var/tmp/spdk.sock
00:05:03.327 14:09:44 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:03.327 14:09:44 -- common/autotest_common.sh@817 -- # '[' -z 3066502 ']'
00:05:03.327 14:09:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:03.327 14:09:44 -- common/autotest_common.sh@822 -- # local max_retries=100
00:05:03.327 14:09:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:03.327 14:09:44 -- common/autotest_common.sh@826 -- # xtrace_disable
00:05:03.327 14:09:44 -- common/autotest_common.sh@10 -- # set +x
00:05:03.327 [2024-04-26 14:09:44.800785] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
00:05:03.327 [2024-04-26 14:09:44.800879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3066502 ]
00:05:03.327 EAL: No free 2048 kB hugepages reported on node 1
00:05:03.327 [2024-04-26 14:09:44.859407] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:03.327 [2024-04-26 14:09:44.859439] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:03.585 [2024-04-26 14:09:44.974114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:03.843 14:09:45 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:05:03.843 14:09:45 -- common/autotest_common.sh@850 -- # return 0
00:05:03.843 14:09:45 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3066518
00:05:03.843 14:09:45 -- event/cpu_locks.sh@103 -- # waitforlisten 3066518 /var/tmp/spdk2.sock
00:05:03.843 14:09:45 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:03.843 14:09:45 -- common/autotest_common.sh@817 -- # '[' -z 3066518 ']'
00:05:03.843 14:09:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:03.843 14:09:45 -- common/autotest_common.sh@822 -- # local max_retries=100
00:05:03.843 14:09:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:03.843 14:09:45 -- common/autotest_common.sh@826 -- # xtrace_disable
00:05:03.843 14:09:45 -- common/autotest_common.sh@10 -- # set +x
00:05:03.843 [2024-04-26 14:09:45.271961] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
00:05:03.843 [2024-04-26 14:09:45.272053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3066518 ]
00:05:03.843 EAL: No free 2048 kB hugepages reported on node 1
00:05:03.843 [2024-04-26 14:09:45.362936] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:04.102 [2024-04-26 14:09:45.595593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:05.038 14:09:46 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:05:05.038 14:09:46 -- common/autotest_common.sh@850 -- # return 0
00:05:05.038 14:09:46 -- event/cpu_locks.sh@105 -- # locks_exist 3066518
00:05:05.038 14:09:46 -- event/cpu_locks.sh@22 -- # lslocks -p 3066518
00:05:05.038 14:09:46 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:05.605 lslocks: write error
00:05:05.605 14:09:46 -- event/cpu_locks.sh@107 -- # killprocess 3066502
00:05:05.605 14:09:46 -- common/autotest_common.sh@936 -- # '[' -z 3066502 ']'
00:05:05.605 14:09:46 -- common/autotest_common.sh@940 -- # kill -0 3066502
00:05:05.605 14:09:46 -- common/autotest_common.sh@941 -- # uname
00:05:05.605 14:09:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:05.605 14:09:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3066502
00:05:05.605 14:09:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:05.605 14:09:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:05.605 14:09:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3066502'
killing process with pid 3066502
00:05:05.605 14:09:47 -- common/autotest_common.sh@955 -- # kill 3066502
00:05:05.605 14:09:47 -- common/autotest_common.sh@960 -- # wait 3066502
00:05:06.171 14:09:47 -- event/cpu_locks.sh@108 -- # killprocess 3066518
00:05:06.171 14:09:47 -- common/autotest_common.sh@936 -- # '[' -z 3066518 ']'
00:05:06.171 14:09:47 -- common/autotest_common.sh@940 -- # kill -0 3066518
00:05:06.171 14:09:47 -- common/autotest_common.sh@941 -- # uname
00:05:06.171 14:09:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:06.171 14:09:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3066518
00:05:06.171 14:09:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:06.171 14:09:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:06.171 14:09:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3066518'
killing process with pid 3066518
00:05:06.171 14:09:47 -- common/autotest_common.sh@955 -- # kill 3066518
00:05:06.171 14:09:47 -- common/autotest_common.sh@960 -- # wait 3066518
00:05:06.738
00:05:06.738 real 0m3.253s
00:05:06.738 user 0m3.600s
00:05:06.738 sys 0m1.049s
00:05:06.738 14:09:47 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:06.738 14:09:47 -- common/autotest_common.sh@10 -- # set +x
00:05:06.738 ************************************
00:05:06.738 END TEST locking_app_on_unlocked_coremask
00:05:06.738 ************************************
00:05:06.738 14:09:48 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:06.738 14:09:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:06.738 14:09:48 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:06.738 14:09:48 -- common/autotest_common.sh@10 -- # set +x
00:05:06.738
************************************ 00:05:06.738 START TEST locking_app_on_locked_coremask 00:05:06.738 ************************************ 00:05:06.738 14:09:48 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:05:06.738 14:09:48 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3066854 00:05:06.738 14:09:48 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:06.738 14:09:48 -- event/cpu_locks.sh@116 -- # waitforlisten 3066854 /var/tmp/spdk.sock 00:05:06.738 14:09:48 -- common/autotest_common.sh@817 -- # '[' -z 3066854 ']' 00:05:06.738 14:09:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.738 14:09:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:06.738 14:09:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.738 14:09:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:06.738 14:09:48 -- common/autotest_common.sh@10 -- # set +x 00:05:06.738 [2024-04-26 14:09:48.197545] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:05:06.738 [2024-04-26 14:09:48.197656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3066854 ] 00:05:06.738 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.738 [2024-04-26 14:09:48.257336] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.996 [2024-04-26 14:09:48.375051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.254 14:09:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:07.254 14:09:48 -- common/autotest_common.sh@850 -- # return 0 00:05:07.254 14:09:48 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3066939 00:05:07.254 14:09:48 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3066939 /var/tmp/spdk2.sock 00:05:07.254 14:09:48 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:07.254 14:09:48 -- common/autotest_common.sh@638 -- # local es=0 00:05:07.254 14:09:48 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3066939 /var/tmp/spdk2.sock 00:05:07.254 14:09:48 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:07.254 14:09:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:07.254 14:09:48 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:07.254 14:09:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:07.254 14:09:48 -- common/autotest_common.sh@641 -- # waitforlisten 3066939 /var/tmp/spdk2.sock 00:05:07.254 14:09:48 -- common/autotest_common.sh@817 -- # '[' -z 3066939 ']' 00:05:07.254 14:09:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:07.254 14:09:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:07.254 14:09:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:07.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:07.254 14:09:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:07.254 14:09:48 -- common/autotest_common.sh@10 -- # set +x 00:05:07.254 [2024-04-26 14:09:48.654811] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:05:07.254 [2024-04-26 14:09:48.654914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3066939 ] 00:05:07.254 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.254 [2024-04-26 14:09:48.743497] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3066854 has claimed it. 00:05:07.254 [2024-04-26 14:09:48.743556] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:07.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3066939) - No such process 00:05:07.845 ERROR: process (pid: 3066939) is no longer running 00:05:07.845 14:09:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:07.845 14:09:49 -- common/autotest_common.sh@850 -- # return 1 00:05:07.845 14:09:49 -- common/autotest_common.sh@641 -- # es=1 00:05:07.845 14:09:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:07.845 14:09:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:07.845 14:09:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:07.845 14:09:49 -- event/cpu_locks.sh@122 -- # locks_exist 3066854 00:05:08.102 14:09:49 -- event/cpu_locks.sh@22 -- # lslocks -p 3066854 00:05:08.102 14:09:49 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:08.360 lslocks: write error 00:05:08.360 14:09:49 -- event/cpu_locks.sh@124 -- # killprocess 3066854 00:05:08.360 14:09:49 -- common/autotest_common.sh@936 -- # '[' -z 3066854 ']' 00:05:08.360 14:09:49 -- common/autotest_common.sh@940 -- # kill -0 3066854 00:05:08.360 14:09:49 -- common/autotest_common.sh@941 -- # uname 00:05:08.360 14:09:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:08.360 14:09:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3066854 00:05:08.360 14:09:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:08.360 14:09:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:08.360 14:09:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3066854' 00:05:08.360 killing process with pid 3066854 00:05:08.360 14:09:49 -- common/autotest_common.sh@955 -- # kill 3066854 00:05:08.360 14:09:49 -- common/autotest_common.sh@960 -- # wait 3066854 00:05:08.618 00:05:08.618 real 0m1.979s 00:05:08.618 user 0m2.265s 00:05:08.618 sys 0m0.622s 00:05:08.618 14:09:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:08.618 14:09:50 -- common/autotest_common.sh@10 -- # set +x 00:05:08.618 ************************************ 00:05:08.618 END TEST locking_app_on_locked_coremask 00:05:08.618 ************************************ 00:05:08.618 14:09:50 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:08.618 14:09:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:08.618 14:09:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.618 14:09:50 -- common/autotest_common.sh@10 -- # set +x 00:05:08.877 ************************************ 00:05:08.877 START TEST locking_overlapped_coremask 00:05:08.877 
************************************ 00:05:08.877 14:09:50 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:05:08.877 14:09:50 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3067093 00:05:08.877 14:09:50 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:08.877 14:09:50 -- event/cpu_locks.sh@133 -- # waitforlisten 3067093 /var/tmp/spdk.sock 00:05:08.877 14:09:50 -- common/autotest_common.sh@817 -- # '[' -z 3067093 ']' 00:05:08.877 14:09:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.877 14:09:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:08.877 14:09:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.877 14:09:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:08.877 14:09:50 -- common/autotest_common.sh@10 -- # set +x 00:05:08.877 [2024-04-26 14:09:50.319457] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:05:08.877 [2024-04-26 14:09:50.319561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3067093 ] 00:05:08.877 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.877 [2024-04-26 14:09:50.379169] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:09.135 [2024-04-26 14:09:50.498208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.135 [2024-04-26 14:09:50.501652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.135 [2024-04-26 14:09:50.501692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.394 14:09:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:09.394 14:09:50 -- common/autotest_common.sh@850 -- # return 0 00:05:09.394 14:09:50 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3067117 00:05:09.394 14:09:50 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:09.394 14:09:50 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3067117 /var/tmp/spdk2.sock 00:05:09.394 14:09:50 -- common/autotest_common.sh@638 -- # local es=0 00:05:09.394 14:09:50 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 3067117 /var/tmp/spdk2.sock 00:05:09.394 14:09:50 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:09.394 14:09:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:09.394 14:09:50 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:09.394 14:09:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:09.394 14:09:50 -- common/autotest_common.sh@641 -- # waitforlisten 3067117 /var/tmp/spdk2.sock 00:05:09.394 14:09:50 -- common/autotest_common.sh@817 -- # '[' -z 3067117 ']' 00:05:09.394 14:09:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:09.394 14:09:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:09.394 14:09:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:09.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:09.394 14:09:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:09.394 14:09:50 -- common/autotest_common.sh@10 -- # set +x 00:05:09.394 [2024-04-26 14:09:50.787111] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:05:09.394 [2024-04-26 14:09:50.787211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3067117 ] 00:05:09.394 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.394 [2024-04-26 14:09:50.877263] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3067093 has claimed it. 00:05:09.394 [2024-04-26 14:09:50.877317] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:09.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (3067117) - No such process 00:05:09.960 ERROR: process (pid: 3067117) is no longer running 00:05:09.960 14:09:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:09.960 14:09:51 -- common/autotest_common.sh@850 -- # return 1 00:05:09.960 14:09:51 -- common/autotest_common.sh@641 -- # es=1 00:05:09.960 14:09:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:09.960 14:09:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:09.960 14:09:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:09.960 14:09:51 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:09.960 14:09:51 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:09.960 14:09:51 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:09.960 14:09:51 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:09.960 14:09:51 -- event/cpu_locks.sh@141 -- # killprocess 3067093 00:05:09.960 14:09:51 -- common/autotest_common.sh@936 -- # '[' -z 3067093 ']' 00:05:09.960 14:09:51 -- common/autotest_common.sh@940 -- # kill -0 3067093 00:05:09.960 14:09:51 -- common/autotest_common.sh@941 -- # uname 00:05:09.960 14:09:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:10.217 14:09:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3067093 00:05:10.217 14:09:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:10.217 14:09:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:10.217 14:09:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3067093' 00:05:10.217 killing process with pid 3067093 00:05:10.217 14:09:51 -- common/autotest_common.sh@955 -- # kill 3067093 00:05:10.217 14:09:51 -- common/autotest_common.sh@960 -- # wait 3067093 00:05:10.476 00:05:10.476 real 0m1.618s 00:05:10.476 user 0m4.373s 00:05:10.476 sys 0m0.411s 00:05:10.476 14:09:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:10.476 14:09:51 -- common/autotest_common.sh@10 -- # set +x 00:05:10.476 ************************************ 00:05:10.476 END TEST locking_overlapped_coremask 00:05:10.476 ************************************ 00:05:10.476 14:09:51 -- event/cpu_locks.sh@172 -- # 
run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:10.476 14:09:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.476 14:09:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.476 14:09:51 -- common/autotest_common.sh@10 -- # set +x 00:05:10.476 ************************************ 00:05:10.476 START TEST locking_overlapped_coremask_via_rpc 00:05:10.476 ************************************ 00:05:10.476 14:09:52 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:05:10.476 14:09:52 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3067323 00:05:10.476 14:09:52 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:10.476 14:09:52 -- event/cpu_locks.sh@149 -- # waitforlisten 3067323 /var/tmp/spdk.sock 00:05:10.476 14:09:52 -- common/autotest_common.sh@817 -- # '[' -z 3067323 ']' 00:05:10.476 14:09:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.476 14:09:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:10.476 14:09:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.476 14:09:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:10.476 14:09:52 -- common/autotest_common.sh@10 -- # set +x 00:05:10.734 [2024-04-26 14:09:52.072314] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:05:10.734 [2024-04-26 14:09:52.072400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3067323 ] 00:05:10.734 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.734 [2024-04-26 14:09:52.131198] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:10.734 [2024-04-26 14:09:52.131235] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:10.734 [2024-04-26 14:09:52.247509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.734 [2024-04-26 14:09:52.247560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:10.734 [2024-04-26 14:09:52.247564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.991 14:09:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:10.992 14:09:52 -- common/autotest_common.sh@850 -- # return 0 00:05:10.992 14:09:52 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3067340 00:05:10.992 14:09:52 -- event/cpu_locks.sh@153 -- # waitforlisten 3067340 /var/tmp/spdk2.sock 00:05:10.992 14:09:52 -- common/autotest_common.sh@817 -- # '[' -z 3067340 ']' 00:05:10.992 14:09:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:10.992 14:09:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:10.992 14:09:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:10.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:10.992 14:09:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:10.992 14:09:52 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:10.992 14:09:52 -- common/autotest_common.sh@10 -- # set +x 00:05:10.992 [2024-04-26 14:09:52.535388] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:05:10.992 [2024-04-26 14:09:52.535495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3067340 ] 00:05:11.250 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.250 [2024-04-26 14:09:52.623688] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:11.250 [2024-04-26 14:09:52.623729] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.508 [2024-04-26 14:09:52.856769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:11.508 [2024-04-26 14:09:52.860688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:11.508 [2024-04-26 14:09:52.860691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.075 14:09:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:12.075 14:09:53 -- common/autotest_common.sh@850 -- # return 0 00:05:12.075 14:09:53 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:12.075 14:09:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:12.075 14:09:53 -- common/autotest_common.sh@10 -- # set +x 00:05:12.075 14:09:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:12.075 14:09:53 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.075 14:09:53 -- common/autotest_common.sh@638 -- # local es=0 00:05:12.075 14:09:53 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.075 14:09:53 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:12.075 14:09:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:12.075 14:09:53 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:12.075 14:09:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:12.075 14:09:53 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.075 14:09:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:12.075 14:09:53 -- common/autotest_common.sh@10 -- # set +x 00:05:12.075 [2024-04-26 14:09:53.576750] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3067323 has claimed it. 
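The claim_cpu_cores error just above is the mechanism these locking tests exercise: the first spdk_tgt takes a /var/tmp/spdk_cpu_lock_* file per claimed core, and a second target whose coremask overlaps cannot start; the JSON-RPC error that follows reports the same failure back to the caller. A minimal sketch of the contention, assuming an SPDK build tree and reusing the masks and socket path from the log (the backgrounding and cleanup scaffolding is illustrative, not part of the test):

  # first target claims cores 0-2 (mask 0x7) and creates /var/tmp/spdk_cpu_lock_000..002
  ./build/bin/spdk_tgt -m 0x7 &
  first=$!
  sleep 2
  # second target asks for cores 2-4 (mask 0x1c) on its own RPC socket;
  # the overlap on core 2 should fail with "Cannot create lock on core 2"
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
  kill $first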
00:05:12.075 request: 00:05:12.075 { 00:05:12.075 "method": "framework_enable_cpumask_locks", 00:05:12.075 "req_id": 1 00:05:12.075 } 00:05:12.075 Got JSON-RPC error response 00:05:12.075 response: 00:05:12.075 { 00:05:12.075 "code": -32603, 00:05:12.075 "message": "Failed to claim CPU core: 2" 00:05:12.075 } 00:05:12.075 14:09:53 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:12.075 14:09:53 -- common/autotest_common.sh@641 -- # es=1 00:05:12.075 14:09:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:12.075 14:09:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:12.075 14:09:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:12.075 14:09:53 -- event/cpu_locks.sh@158 -- # waitforlisten 3067323 /var/tmp/spdk.sock 00:05:12.075 14:09:53 -- common/autotest_common.sh@817 -- # '[' -z 3067323 ']' 00:05:12.075 14:09:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.075 14:09:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:12.075 14:09:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.075 14:09:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:12.075 14:09:53 -- common/autotest_common.sh@10 -- # set +x 00:05:12.333 14:09:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:12.334 14:09:53 -- common/autotest_common.sh@850 -- # return 0 00:05:12.334 14:09:53 -- event/cpu_locks.sh@159 -- # waitforlisten 3067340 /var/tmp/spdk2.sock 00:05:12.334 14:09:53 -- common/autotest_common.sh@817 -- # '[' -z 3067340 ']' 00:05:12.334 14:09:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.334 14:09:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:12.334 14:09:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:12.334 14:09:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:12.334 14:09:53 -- common/autotest_common.sh@10 -- # set +x 00:05:12.899 14:09:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:12.899 14:09:54 -- common/autotest_common.sh@850 -- # return 0 00:05:12.899 14:09:54 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:12.899 14:09:54 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:12.899 14:09:54 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:12.899 14:09:54 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:12.899 00:05:12.899 real 0m2.167s 00:05:12.899 user 0m1.241s 00:05:12.899 sys 0m0.186s 00:05:12.899 14:09:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:12.899 14:09:54 -- common/autotest_common.sh@10 -- # set +x 00:05:12.899 ************************************ 00:05:12.899 END TEST locking_overlapped_coremask_via_rpc 00:05:12.899 ************************************ 00:05:12.899 14:09:54 -- event/cpu_locks.sh@174 -- # cleanup 00:05:12.899 14:09:54 -- event/cpu_locks.sh@15 -- # [[ -z 3067323 ]] 00:05:12.899 14:09:54 -- event/cpu_locks.sh@15 -- # killprocess 3067323 00:05:12.899 14:09:54 -- common/autotest_common.sh@936 -- # '[' -z 3067323 ']' 00:05:12.899 14:09:54 -- common/autotest_common.sh@940 -- # kill -0 3067323 00:05:12.899 14:09:54 -- common/autotest_common.sh@941 -- # uname 00:05:12.899 14:09:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:12.899 14:09:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3067323 00:05:12.899 14:09:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:12.899 14:09:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:12.899 14:09:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3067323' 00:05:12.899 killing process with pid 3067323 00:05:12.899 14:09:54 -- common/autotest_common.sh@955 -- # kill 3067323 00:05:12.899 14:09:54 -- common/autotest_common.sh@960 -- # wait 3067323 00:05:13.158 14:09:54 -- event/cpu_locks.sh@16 -- # [[ -z 3067340 ]] 00:05:13.158 14:09:54 -- event/cpu_locks.sh@16 -- # killprocess 3067340 00:05:13.158 14:09:54 -- common/autotest_common.sh@936 -- # '[' -z 3067340 ']' 00:05:13.158 14:09:54 -- common/autotest_common.sh@940 -- # kill -0 3067340 00:05:13.158 14:09:54 -- common/autotest_common.sh@941 -- # uname 00:05:13.158 14:09:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:13.158 14:09:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3067340 00:05:13.158 14:09:54 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:13.158 14:09:54 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:13.158 14:09:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3067340' 00:05:13.158 killing process with pid 3067340 00:05:13.158 14:09:54 -- common/autotest_common.sh@955 -- # kill 3067340 00:05:13.158 14:09:54 -- common/autotest_common.sh@960 -- # wait 3067340 00:05:13.416 14:09:54 -- event/cpu_locks.sh@18 -- # rm -f 00:05:13.416 14:09:54 -- event/cpu_locks.sh@1 -- # cleanup 00:05:13.416 14:09:54 -- event/cpu_locks.sh@15 -- # [[ -z 3067323 ]] 00:05:13.416 14:09:54 -- event/cpu_locks.sh@15 -- # killprocess 3067323 
00:05:13.416 14:09:54 -- common/autotest_common.sh@936 -- # '[' -z 3067323 ']' 00:05:13.416 14:09:54 -- common/autotest_common.sh@940 -- # kill -0 3067323 00:05:13.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3067323) - No such process 00:05:13.416 14:09:54 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3067323 is not found' 00:05:13.416 Process with pid 3067323 is not found 00:05:13.416 14:09:54 -- event/cpu_locks.sh@16 -- # [[ -z 3067340 ]] 00:05:13.416 14:09:54 -- event/cpu_locks.sh@16 -- # killprocess 3067340 00:05:13.416 14:09:54 -- common/autotest_common.sh@936 -- # '[' -z 3067340 ']' 00:05:13.416 14:09:54 -- common/autotest_common.sh@940 -- # kill -0 3067340 00:05:13.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3067340) - No such process 00:05:13.416 14:09:54 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3067340 is not found' 00:05:13.416 Process with pid 3067340 is not found 00:05:13.416 14:09:54 -- event/cpu_locks.sh@18 -- # rm -f 00:05:13.416 00:05:13.416 real 0m16.449s 00:05:13.416 user 0m28.774s 00:05:13.416 sys 0m5.407s 00:05:13.416 14:09:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:13.416 14:09:54 -- common/autotest_common.sh@10 -- # set +x 00:05:13.417 ************************************ 00:05:13.417 END TEST cpu_locks 00:05:13.417 ************************************ 00:05:13.417 00:05:13.417 real 0m44.695s 00:05:13.417 user 1m25.612s 00:05:13.417 sys 0m9.981s 00:05:13.417 14:09:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:13.417 14:09:54 -- common/autotest_common.sh@10 -- # set +x 00:05:13.417 ************************************ 00:05:13.417 END TEST event 00:05:13.417 ************************************ 00:05:13.675 14:09:54 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:13.675 14:09:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.675 14:09:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.675 14:09:54 -- common/autotest_common.sh@10 -- # set +x 00:05:13.675 ************************************ 00:05:13.675 START TEST thread 00:05:13.675 ************************************ 00:05:13.675 14:09:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:13.675 * Looking for test storage... 00:05:13.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:13.675 14:09:55 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:13.675 14:09:55 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:13.675 14:09:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.675 14:09:55 -- common/autotest_common.sh@10 -- # set +x 00:05:13.934 ************************************ 00:05:13.934 START TEST thread_poller_perf 00:05:13.934 ************************************ 00:05:13.934 14:09:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:13.934 [2024-04-26 14:09:55.276382] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
00:05:13.934 [2024-04-26 14:09:55.276451] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3067742 ] 00:05:13.934 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.934 [2024-04-26 14:09:55.334841] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.934 [2024-04-26 14:09:55.448802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.934 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:15.308 ====================================== 00:05:15.308 busy:2710303508 (cyc) 00:05:15.308 total_run_count: 261000 00:05:15.308 tsc_hz: 2700000000 (cyc) 00:05:15.308 ====================================== 00:05:15.308 poller_cost: 10384 (cyc), 3845 (nsec) 00:05:15.308 00:05:15.308 real 0m1.303s 00:05:15.308 user 0m1.233s 00:05:15.308 sys 0m0.064s 00:05:15.308 14:09:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:15.308 14:09:56 -- common/autotest_common.sh@10 -- # set +x 00:05:15.308 ************************************ 00:05:15.308 END TEST thread_poller_perf 00:05:15.308 ************************************ 00:05:15.308 14:09:56 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:15.308 14:09:56 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:15.308 14:09:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.308 14:09:56 -- common/autotest_common.sh@10 -- # set +x 00:05:15.308 ************************************ 00:05:15.308 START TEST thread_poller_perf 00:05:15.308 ************************************ 00:05:15.308 14:09:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:15.308 [2024-04-26 14:09:56.718325] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:05:15.308 [2024-04-26 14:09:56.718394] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3067879 ] 00:05:15.308 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.308 [2024-04-26 14:09:56.776998] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.567 [2024-04-26 14:09:56.894035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.567 Running 1000 pollers for 1 seconds with 0 microseconds period. 
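The poller_cost line above follows directly from the counters poller_perf prints: cycles per poll is busy cycles divided by total_run_count, and the nanosecond figure converts that through tsc_hz. A quick check of the first run's numbers, using awk only as a calculator and assuming the tool truncates to integers:

  awk 'BEGIN {
    busy = 2710303508; runs = 261000; hz = 2700000000
    cyc = int(busy / runs)        # 10384 cycles per poll
    ns  = int(cyc * 1e9 / hz)     # 3845 nsec at 2.7 GHz
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, ns
  }'

The 0-microsecond run reported just below works out the same way: 2702913808 cycles over 3651000 polls gives 740 cycles, or 274 nsec at the same 2.7 GHz TSC.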
00:05:16.503 ====================================== 00:05:16.503 busy:2702913808 (cyc) 00:05:16.503 total_run_count: 3651000 00:05:16.503 tsc_hz: 2700000000 (cyc) 00:05:16.503 ====================================== 00:05:16.503 poller_cost: 740 (cyc), 274 (nsec) 00:05:16.503 00:05:16.503 real 0m1.300s 00:05:16.503 user 0m1.213s 00:05:16.503 sys 0m0.079s 00:05:16.503 14:09:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:16.503 14:09:58 -- common/autotest_common.sh@10 -- # set +x 00:05:16.503 ************************************ 00:05:16.503 END TEST thread_poller_perf 00:05:16.503 ************************************ 00:05:16.503 14:09:58 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:16.503 00:05:16.503 real 0m2.917s 00:05:16.503 user 0m2.567s 00:05:16.503 sys 0m0.310s 00:05:16.503 14:09:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:16.503 14:09:58 -- common/autotest_common.sh@10 -- # set +x 00:05:16.503 ************************************ 00:05:16.503 END TEST thread 00:05:16.503 ************************************ 00:05:16.503 14:09:58 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:16.503 14:09:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.503 14:09:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.503 14:09:58 -- common/autotest_common.sh@10 -- # set +x 00:05:16.762 ************************************ 00:05:16.762 START TEST accel 00:05:16.762 ************************************ 00:05:16.762 14:09:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:16.762 * Looking for test storage... 00:05:16.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:16.762 14:09:58 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:16.762 14:09:58 -- accel/accel.sh@82 -- # get_expected_opcs 00:05:16.762 14:09:58 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:16.762 14:09:58 -- accel/accel.sh@62 -- # spdk_tgt_pid=3068054 00:05:16.762 14:09:58 -- accel/accel.sh@63 -- # waitforlisten 3068054 00:05:16.762 14:09:58 -- common/autotest_common.sh@817 -- # '[' -z 3068054 ']' 00:05:16.762 14:09:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.762 14:09:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:16.762 14:09:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.762 14:09:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:16.762 14:09:58 -- common/autotest_common.sh@10 -- # set +x 00:05:16.762 14:09:58 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:16.762 14:09:58 -- accel/accel.sh@61 -- # build_accel_config 00:05:16.762 14:09:58 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:16.762 14:09:58 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:16.762 14:09:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:16.762 14:09:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:16.762 14:09:58 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:16.762 14:09:58 -- accel/accel.sh@40 -- # local IFS=, 00:05:16.762 14:09:58 -- accel/accel.sh@41 -- # jq -r . 
00:05:16.762 [2024-04-26 14:09:58.267096] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:05:16.762 [2024-04-26 14:09:58.267205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3068054 ] 00:05:16.762 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.762 [2024-04-26 14:09:58.328059] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.020 [2024-04-26 14:09:58.445710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.279 14:09:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:17.279 14:09:58 -- common/autotest_common.sh@850 -- # return 0 00:05:17.279 14:09:58 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:17.279 14:09:58 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:17.279 14:09:58 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:17.279 14:09:58 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:17.279 14:09:58 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:17.279 14:09:58 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:17.279 14:09:58 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:17.279 14:09:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:17.279 14:09:58 -- common/autotest_common.sh@10 -- # set +x 00:05:17.279 14:09:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:17.279 14:09:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # IFS== 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # read -r opc module 00:05:17.279 14:09:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.279 14:09:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # IFS== 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # read -r opc module 00:05:17.279 14:09:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.279 14:09:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # IFS== 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # read -r opc module 00:05:17.279 14:09:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.279 14:09:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # IFS== 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # read -r opc module 00:05:17.279 14:09:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.279 14:09:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # IFS== 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # read -r opc module 00:05:17.279 14:09:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.279 14:09:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # IFS== 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # read -r opc module 00:05:17.279 14:09:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.279 14:09:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # IFS== 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # read -r opc module 00:05:17.279 14:09:58 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:05:17.279 14:09:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # IFS== 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # read -r opc module 00:05:17.279 14:09:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.279 14:09:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # IFS== 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # read -r opc module 00:05:17.279 14:09:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.279 14:09:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # IFS== 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # read -r opc module 00:05:17.279 14:09:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.279 14:09:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # IFS== 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # read -r opc module 00:05:17.279 14:09:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.279 14:09:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # IFS== 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # read -r opc module 00:05:17.279 14:09:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.279 14:09:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # IFS== 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # read -r opc module 00:05:17.279 14:09:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.279 14:09:58 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # IFS== 00:05:17.279 14:09:58 -- accel/accel.sh@72 -- # read -r opc module 00:05:17.279 14:09:58 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.279 14:09:58 -- accel/accel.sh@75 -- # killprocess 3068054 00:05:17.279 14:09:58 -- common/autotest_common.sh@936 -- # '[' -z 3068054 ']' 00:05:17.279 14:09:58 -- common/autotest_common.sh@940 -- # kill -0 3068054 00:05:17.280 14:09:58 -- common/autotest_common.sh@941 -- # uname 00:05:17.280 14:09:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:17.280 14:09:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3068054 00:05:17.280 14:09:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:17.280 14:09:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:17.280 14:09:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3068054' 00:05:17.280 killing process with pid 3068054 00:05:17.280 14:09:58 -- common/autotest_common.sh@955 -- # kill 3068054 00:05:17.280 14:09:58 -- common/autotest_common.sh@960 -- # wait 3068054 00:05:17.537 14:09:59 -- accel/accel.sh@76 -- # trap - ERR 00:05:17.537 14:09:59 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:17.537 14:09:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:17.537 14:09:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.537 14:09:59 -- common/autotest_common.sh@10 -- # set +x 00:05:17.796 14:09:59 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:05:17.796 14:09:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:17.796 14:09:59 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:17.796 14:09:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.796 14:09:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.796 14:09:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.796 14:09:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.796 14:09:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.796 14:09:59 -- accel/accel.sh@40 -- # local IFS=, 00:05:17.796 14:09:59 -- accel/accel.sh@41 -- # jq -r . 00:05:17.796 14:09:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:17.796 14:09:59 -- common/autotest_common.sh@10 -- # set +x 00:05:17.796 14:09:59 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:17.796 14:09:59 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:17.796 14:09:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.796 14:09:59 -- common/autotest_common.sh@10 -- # set +x 00:05:17.796 ************************************ 00:05:17.796 START TEST accel_missing_filename 00:05:17.796 ************************************ 00:05:17.796 14:09:59 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:05:17.796 14:09:59 -- common/autotest_common.sh@638 -- # local es=0 00:05:17.796 14:09:59 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:17.796 14:09:59 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:17.796 14:09:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:17.796 14:09:59 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:17.796 14:09:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:17.796 14:09:59 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:05:17.796 14:09:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:17.796 14:09:59 -- accel/accel.sh@12 -- # build_accel_config 00:05:17.796 14:09:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:17.796 14:09:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:17.796 14:09:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.796 14:09:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.796 14:09:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:17.796 14:09:59 -- accel/accel.sh@40 -- # local IFS=, 00:05:17.796 14:09:59 -- accel/accel.sh@41 -- # jq -r . 00:05:17.796 [2024-04-26 14:09:59.345061] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:05:17.796 [2024-04-26 14:09:59.345131] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3068203 ] 00:05:18.055 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.055 [2024-04-26 14:09:59.404405] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.055 [2024-04-26 14:09:59.521801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.055 [2024-04-26 14:09:59.571953] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:18.055 [2024-04-26 14:09:59.620770] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:05:18.314 A filename is required. 
00:05:18.314 14:09:59 -- common/autotest_common.sh@641 -- # es=234 00:05:18.314 14:09:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:18.314 14:09:59 -- common/autotest_common.sh@650 -- # es=106 00:05:18.314 14:09:59 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:18.314 14:09:59 -- common/autotest_common.sh@658 -- # es=1 00:05:18.314 14:09:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:18.314 00:05:18.314 real 0m0.404s 00:05:18.314 user 0m0.308s 00:05:18.314 sys 0m0.131s 00:05:18.314 14:09:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:18.314 14:09:59 -- common/autotest_common.sh@10 -- # set +x 00:05:18.314 ************************************ 00:05:18.314 END TEST accel_missing_filename 00:05:18.314 ************************************ 00:05:18.314 14:09:59 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.314 14:09:59 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:18.314 14:09:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.314 14:09:59 -- common/autotest_common.sh@10 -- # set +x 00:05:18.314 ************************************ 00:05:18.314 START TEST accel_compress_verify 00:05:18.314 ************************************ 00:05:18.314 14:09:59 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.314 14:09:59 -- common/autotest_common.sh@638 -- # local es=0 00:05:18.314 14:09:59 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.314 14:09:59 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:18.314 14:09:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:18.314 14:09:59 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:18.314 14:09:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:18.314 14:09:59 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.314 14:09:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.314 14:09:59 -- accel/accel.sh@12 -- # build_accel_config 00:05:18.314 14:09:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.314 14:09:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.314 14:09:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.314 14:09:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.314 14:09:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.314 14:09:59 -- accel/accel.sh@40 -- # local IFS=, 00:05:18.314 14:09:59 -- accel/accel.sh@41 -- # jq -r . 00:05:18.572 [2024-04-26 14:09:59.886200] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
00:05:18.572 [2024-04-26 14:09:59.886272] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3068319 ] 00:05:18.572 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.572 [2024-04-26 14:09:59.944839] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.572 [2024-04-26 14:10:00.063295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.572 [2024-04-26 14:10:00.115080] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:18.831 [2024-04-26 14:10:00.163580] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:05:18.831 00:05:18.831 Compression does not support the verify option, aborting. 00:05:18.831 14:10:00 -- common/autotest_common.sh@641 -- # es=161 00:05:18.831 14:10:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:18.831 14:10:00 -- common/autotest_common.sh@650 -- # es=33 00:05:18.831 14:10:00 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:18.831 14:10:00 -- common/autotest_common.sh@658 -- # es=1 00:05:18.831 14:10:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:18.831 00:05:18.831 real 0m0.408s 00:05:18.831 user 0m0.319s 00:05:18.831 sys 0m0.125s 00:05:18.831 14:10:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:18.831 14:10:00 -- common/autotest_common.sh@10 -- # set +x 00:05:18.831 ************************************ 00:05:18.831 END TEST accel_compress_verify 00:05:18.831 ************************************ 00:05:18.831 14:10:00 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:18.831 14:10:00 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:18.832 14:10:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.832 14:10:00 -- common/autotest_common.sh@10 -- # set +x 00:05:19.092 ************************************ 00:05:19.092 START TEST accel_wrong_workload 00:05:19.092 ************************************ 00:05:19.092 14:10:00 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:05:19.092 14:10:00 -- common/autotest_common.sh@638 -- # local es=0 00:05:19.092 14:10:00 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:19.092 14:10:00 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:19.092 14:10:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:19.092 14:10:00 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:19.092 14:10:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:19.092 14:10:00 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:05:19.092 14:10:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:19.092 14:10:00 -- accel/accel.sh@12 -- # build_accel_config 00:05:19.092 14:10:00 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:19.092 14:10:00 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:19.092 14:10:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.092 14:10:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.092 14:10:00 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:19.092 14:10:00 -- accel/accel.sh@40 -- # local IFS=, 00:05:19.092 14:10:00 -- accel/accel.sh@41 -- # jq -r . 
00:05:19.092 Unsupported workload type: foobar
00:05:19.092 [2024-04-26 14:10:00.429625] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1
00:05:19.092 accel_perf options:
00:05:19.092 [-h help message]
00:05:19.092 [-q queue depth per core]
00:05:19.092 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:05:19.092 [-T number of threads per core
00:05:19.092 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:05:19.092 [-t time in seconds]
00:05:19.092 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:05:19.092 [ dif_verify, , dif_generate, dif_generate_copy
00:05:19.092 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:05:19.092 [-l for compress/decompress workloads, name of uncompressed input file
00:05:19.092 [-S for crc32c workload, use this seed value (default 0)
00:05:19.092 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:05:19.092 [-f for fill workload, use this BYTE value (default 255)
00:05:19.092 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:05:19.092 [-y verify result if this switch is on]
00:05:19.092 [-a tasks to allocate per core (default: same value as -q)]
00:05:19.092 Can be used to spread operations across a wider range of memory.
00:05:19.092 14:10:00 -- common/autotest_common.sh@641 -- # es=1
00:05:19.092 14:10:00 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:05:19.092 14:10:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:05:19.092 14:10:00 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:05:19.092
00:05:19.092 real 0m0.025s
00:05:19.092 user 0m0.018s
00:05:19.092 sys 0m0.007s
00:05:19.092 14:10:00 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:19.092 14:10:00 -- common/autotest_common.sh@10 -- # set +x
00:05:19.092 ************************************
00:05:19.092 END TEST accel_wrong_workload
00:05:19.092 ************************************
00:05:19.092 Error: writing output failed: Broken pipe
00:05:19.092 14:10:00 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1
00:05:19.092 14:10:00 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']'
00:05:19.092 14:10:00 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:19.092 14:10:00 -- common/autotest_common.sh@10 -- # set +x
00:05:19.092 ************************************
00:05:19.092 START TEST accel_negative_buffers
00:05:19.092 ************************************
00:05:19.092 14:10:00 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1
00:05:19.092 14:10:00 -- common/autotest_common.sh@638 -- # local es=0
00:05:19.092 14:10:00 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1
00:05:19.092 14:10:00 -- common/autotest_common.sh@626 -- # local arg=accel_perf
00:05:19.092 14:10:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:05:19.092 14:10:00 -- common/autotest_common.sh@630 -- # type -t accel_perf
00:05:19.092 14:10:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:05:19.092 14:10:00 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1
00:05:19.092 14:10:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1
00:05:19.092 14:10:00 -- accel/accel.sh@12 -- # build_accel_config
00:05:19.092 14:10:00 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:19.092 14:10:00 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:19.092 14:10:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:19.092 14:10:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:19.092 14:10:00 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:19.092 14:10:00 -- accel/accel.sh@40 -- # local IFS=,
00:05:19.092 14:10:00 -- accel/accel.sh@41 -- # jq -r .
00:05:19.092 -x option must be non-negative.
00:05:19.092 [2024-04-26 14:10:00.577173] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1
00:05:19.092 accel_perf options:
00:05:19.092 [-h help message]
00:05:19.092 [-q queue depth per core]
00:05:19.092 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:05:19.092 [-T number of threads per core
00:05:19.092 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:05:19.092 [-t time in seconds]
00:05:19.092 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:05:19.092 [ dif_verify, , dif_generate, dif_generate_copy
00:05:19.092 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:05:19.092 [-l for compress/decompress workloads, name of uncompressed input file
00:05:19.092 [-S for crc32c workload, use this seed value (default 0)
00:05:19.092 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:05:19.092 [-f for fill workload, use this BYTE value (default 255)
00:05:19.092 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:05:19.092 [-y verify result if this switch is on]
00:05:19.092 [-a tasks to allocate per core (default: same value as -q)]
00:05:19.092 Can be used to spread operations across a wider range of memory.
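Both aborts above happen inside spdk_app_parse_args, before any I/O is issued, so accel_perf exits 1 and the NOT wrapper counts that as a pass. Illustrative invocations, assuming accel_perf is on PATH:

accel_perf -t 1 -w xor -y -x 3        # fine: xor across three source buffers
NOT accel_perf -t 1 -w foobar         # rejected: unknown workload type
NOT accel_perf -t 1 -w xor -y -x -1   # rejected: -x must be non-negative (minimum 2 buffers)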
00:05:19.092 14:10:00 -- common/autotest_common.sh@641 -- # es=1 00:05:19.092 14:10:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:19.092 14:10:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:19.092 14:10:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:19.092 00:05:19.092 real 0m0.023s 00:05:19.092 user 0m0.014s 00:05:19.092 sys 0m0.009s 00:05:19.092 14:10:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:19.092 14:10:00 -- common/autotest_common.sh@10 -- # set +x 00:05:19.092 ************************************ 00:05:19.092 END TEST accel_negative_buffers 00:05:19.092 ************************************ 00:05:19.092 Error: writing output failed: Broken pipe 00:05:19.092 14:10:00 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:19.092 14:10:00 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:19.092 14:10:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.092 14:10:00 -- common/autotest_common.sh@10 -- # set +x 00:05:19.352 ************************************ 00:05:19.352 START TEST accel_crc32c 00:05:19.352 ************************************ 00:05:19.352 14:10:00 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:19.352 14:10:00 -- accel/accel.sh@16 -- # local accel_opc 00:05:19.352 14:10:00 -- accel/accel.sh@17 -- # local accel_module 00:05:19.352 14:10:00 -- accel/accel.sh@19 -- # IFS=: 00:05:19.352 14:10:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:19.352 14:10:00 -- accel/accel.sh@19 -- # read -r var val 00:05:19.352 14:10:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:19.352 14:10:00 -- accel/accel.sh@12 -- # build_accel_config 00:05:19.352 14:10:00 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:19.352 14:10:00 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:19.352 14:10:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.352 14:10:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.352 14:10:00 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:19.352 14:10:00 -- accel/accel.sh@40 -- # local IFS=, 00:05:19.352 14:10:00 -- accel/accel.sh@41 -- # jq -r . 00:05:19.352 [2024-04-26 14:10:00.734997] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
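accel_crc32c, starting here, is the first positive case after the option-parsing checks: one second of CRC-32C over 4096-byte buffers on the software module, with -S 32 seeding the CRC and -y verifying every result. The long val= run that follows is accel.sh replaying those settings into the perf tool; an equivalent direct call would be roughly (the -o/-q spellings are an assumption, only -t/-w/-S/-y appear verbatim in the run_test line):

accel_perf -t 1 -w crc32c -S 32 -y -o 4096 -q 32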
00:05:19.352 [2024-04-26 14:10:00.735067] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3068459 ] 00:05:19.352 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.352 [2024-04-26 14:10:00.794435] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.352 [2024-04-26 14:10:00.912118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.610 14:10:00 -- accel/accel.sh@20 -- # val= 00:05:19.610 14:10:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # IFS=: 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # read -r var val 00:05:19.610 14:10:00 -- accel/accel.sh@20 -- # val= 00:05:19.610 14:10:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # IFS=: 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # read -r var val 00:05:19.610 14:10:00 -- accel/accel.sh@20 -- # val=0x1 00:05:19.610 14:10:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # IFS=: 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # read -r var val 00:05:19.610 14:10:00 -- accel/accel.sh@20 -- # val= 00:05:19.610 14:10:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # IFS=: 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # read -r var val 00:05:19.610 14:10:00 -- accel/accel.sh@20 -- # val= 00:05:19.610 14:10:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # IFS=: 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # read -r var val 00:05:19.610 14:10:00 -- accel/accel.sh@20 -- # val=crc32c 00:05:19.610 14:10:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.610 14:10:00 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # IFS=: 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # read -r var val 00:05:19.610 14:10:00 -- accel/accel.sh@20 -- # val=32 00:05:19.610 14:10:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # IFS=: 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # read -r var val 00:05:19.610 14:10:00 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:19.610 14:10:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # IFS=: 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # read -r var val 00:05:19.610 14:10:00 -- accel/accel.sh@20 -- # val= 00:05:19.610 14:10:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # IFS=: 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # read -r var val 00:05:19.610 14:10:00 -- accel/accel.sh@20 -- # val=software 00:05:19.610 14:10:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.610 14:10:00 -- accel/accel.sh@22 -- # accel_module=software 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # IFS=: 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # read -r var val 00:05:19.610 14:10:00 -- accel/accel.sh@20 -- # val=32 00:05:19.610 14:10:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # IFS=: 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # read -r var val 00:05:19.610 14:10:00 -- accel/accel.sh@20 -- # val=32 00:05:19.610 14:10:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # IFS=: 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # read -r var val 00:05:19.610 14:10:00 -- 
accel/accel.sh@20 -- # val=1 00:05:19.610 14:10:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # IFS=: 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # read -r var val 00:05:19.610 14:10:00 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:19.610 14:10:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # IFS=: 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # read -r var val 00:05:19.610 14:10:00 -- accel/accel.sh@20 -- # val=Yes 00:05:19.610 14:10:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # IFS=: 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # read -r var val 00:05:19.610 14:10:00 -- accel/accel.sh@20 -- # val= 00:05:19.610 14:10:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # IFS=: 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # read -r var val 00:05:19.610 14:10:00 -- accel/accel.sh@20 -- # val= 00:05:19.610 14:10:00 -- accel/accel.sh@21 -- # case "$var" in 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # IFS=: 00:05:19.610 14:10:00 -- accel/accel.sh@19 -- # read -r var val 00:05:20.983 14:10:02 -- accel/accel.sh@20 -- # val= 00:05:20.983 14:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.983 14:10:02 -- accel/accel.sh@19 -- # IFS=: 00:05:20.983 14:10:02 -- accel/accel.sh@19 -- # read -r var val 00:05:20.983 14:10:02 -- accel/accel.sh@20 -- # val= 00:05:20.983 14:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.983 14:10:02 -- accel/accel.sh@19 -- # IFS=: 00:05:20.983 14:10:02 -- accel/accel.sh@19 -- # read -r var val 00:05:20.984 14:10:02 -- accel/accel.sh@20 -- # val= 00:05:20.984 14:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # IFS=: 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # read -r var val 00:05:20.984 14:10:02 -- accel/accel.sh@20 -- # val= 00:05:20.984 14:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # IFS=: 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # read -r var val 00:05:20.984 14:10:02 -- accel/accel.sh@20 -- # val= 00:05:20.984 14:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # IFS=: 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # read -r var val 00:05:20.984 14:10:02 -- accel/accel.sh@20 -- # val= 00:05:20.984 14:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # IFS=: 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # read -r var val 00:05:20.984 14:10:02 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:20.984 14:10:02 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:20.984 14:10:02 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:20.984 00:05:20.984 real 0m1.409s 00:05:20.984 user 0m1.287s 00:05:20.984 sys 0m0.122s 00:05:20.984 14:10:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:20.984 14:10:02 -- common/autotest_common.sh@10 -- # set +x 00:05:20.984 ************************************ 00:05:20.984 END TEST accel_crc32c 00:05:20.984 ************************************ 00:05:20.984 14:10:02 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:20.984 14:10:02 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:20.984 14:10:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.984 14:10:02 -- common/autotest_common.sh@10 -- # set +x 00:05:20.984 ************************************ 00:05:20.984 START TEST 
accel_crc32c_C2 00:05:20.984 ************************************ 00:05:20.984 14:10:02 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:20.984 14:10:02 -- accel/accel.sh@16 -- # local accel_opc 00:05:20.984 14:10:02 -- accel/accel.sh@17 -- # local accel_module 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # IFS=: 00:05:20.984 14:10:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # read -r var val 00:05:20.984 14:10:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:20.984 14:10:02 -- accel/accel.sh@12 -- # build_accel_config 00:05:20.984 14:10:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.984 14:10:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.984 14:10:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.984 14:10:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.984 14:10:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.984 14:10:02 -- accel/accel.sh@40 -- # local IFS=, 00:05:20.984 14:10:02 -- accel/accel.sh@41 -- # jq -r . 00:05:20.984 [2024-04-26 14:10:02.278338] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:05:20.984 [2024-04-26 14:10:02.278406] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3068630 ] 00:05:20.984 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.984 [2024-04-26 14:10:02.336929] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.984 [2024-04-26 14:10:02.454150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.984 14:10:02 -- accel/accel.sh@20 -- # val= 00:05:20.984 14:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # IFS=: 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # read -r var val 00:05:20.984 14:10:02 -- accel/accel.sh@20 -- # val= 00:05:20.984 14:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # IFS=: 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # read -r var val 00:05:20.984 14:10:02 -- accel/accel.sh@20 -- # val=0x1 00:05:20.984 14:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # IFS=: 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # read -r var val 00:05:20.984 14:10:02 -- accel/accel.sh@20 -- # val= 00:05:20.984 14:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # IFS=: 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # read -r var val 00:05:20.984 14:10:02 -- accel/accel.sh@20 -- # val= 00:05:20.984 14:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # IFS=: 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # read -r var val 00:05:20.984 14:10:02 -- accel/accel.sh@20 -- # val=crc32c 00:05:20.984 14:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.984 14:10:02 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # IFS=: 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # read -r var val 00:05:20.984 14:10:02 -- accel/accel.sh@20 -- # val=0 00:05:20.984 14:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # IFS=: 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # read -r var val 00:05:20.984 14:10:02 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:05:20.984 14:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # IFS=: 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # read -r var val 00:05:20.984 14:10:02 -- accel/accel.sh@20 -- # val= 00:05:20.984 14:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # IFS=: 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # read -r var val 00:05:20.984 14:10:02 -- accel/accel.sh@20 -- # val=software 00:05:20.984 14:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.984 14:10:02 -- accel/accel.sh@22 -- # accel_module=software 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # IFS=: 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # read -r var val 00:05:20.984 14:10:02 -- accel/accel.sh@20 -- # val=32 00:05:20.984 14:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # IFS=: 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # read -r var val 00:05:20.984 14:10:02 -- accel/accel.sh@20 -- # val=32 00:05:20.984 14:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # IFS=: 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # read -r var val 00:05:20.984 14:10:02 -- accel/accel.sh@20 -- # val=1 00:05:20.984 14:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # IFS=: 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # read -r var val 00:05:20.984 14:10:02 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:20.984 14:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # IFS=: 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # read -r var val 00:05:20.984 14:10:02 -- accel/accel.sh@20 -- # val=Yes 00:05:20.984 14:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # IFS=: 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # read -r var val 00:05:20.984 14:10:02 -- accel/accel.sh@20 -- # val= 00:05:20.984 14:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # IFS=: 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # read -r var val 00:05:20.984 14:10:02 -- accel/accel.sh@20 -- # val= 00:05:20.984 14:10:02 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # IFS=: 00:05:20.984 14:10:02 -- accel/accel.sh@19 -- # read -r var val 00:05:22.358 14:10:03 -- accel/accel.sh@20 -- # val= 00:05:22.358 14:10:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.358 14:10:03 -- accel/accel.sh@19 -- # IFS=: 00:05:22.358 14:10:03 -- accel/accel.sh@19 -- # read -r var val 00:05:22.358 14:10:03 -- accel/accel.sh@20 -- # val= 00:05:22.358 14:10:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.358 14:10:03 -- accel/accel.sh@19 -- # IFS=: 00:05:22.358 14:10:03 -- accel/accel.sh@19 -- # read -r var val 00:05:22.358 14:10:03 -- accel/accel.sh@20 -- # val= 00:05:22.358 14:10:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.358 14:10:03 -- accel/accel.sh@19 -- # IFS=: 00:05:22.358 14:10:03 -- accel/accel.sh@19 -- # read -r var val 00:05:22.358 14:10:03 -- accel/accel.sh@20 -- # val= 00:05:22.358 14:10:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.358 14:10:03 -- accel/accel.sh@19 -- # IFS=: 00:05:22.358 14:10:03 -- accel/accel.sh@19 -- # read -r var val 00:05:22.358 14:10:03 -- accel/accel.sh@20 -- # val= 00:05:22.358 14:10:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.358 14:10:03 -- accel/accel.sh@19 -- # IFS=: 00:05:22.358 14:10:03 -- 
accel/accel.sh@19 -- # read -r var val 00:05:22.358 14:10:03 -- accel/accel.sh@20 -- # val= 00:05:22.358 14:10:03 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.358 14:10:03 -- accel/accel.sh@19 -- # IFS=: 00:05:22.358 14:10:03 -- accel/accel.sh@19 -- # read -r var val 00:05:22.358 14:10:03 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:22.358 14:10:03 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:22.358 14:10:03 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:22.358 00:05:22.358 real 0m1.406s 00:05:22.358 user 0m1.283s 00:05:22.358 sys 0m0.122s 00:05:22.358 14:10:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:22.358 14:10:03 -- common/autotest_common.sh@10 -- # set +x 00:05:22.358 ************************************ 00:05:22.358 END TEST accel_crc32c_C2 00:05:22.358 ************************************ 00:05:22.358 14:10:03 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:22.358 14:10:03 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:22.358 14:10:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.358 14:10:03 -- common/autotest_common.sh@10 -- # set +x 00:05:22.358 ************************************ 00:05:22.358 START TEST accel_copy 00:05:22.358 ************************************ 00:05:22.358 14:10:03 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:05:22.358 14:10:03 -- accel/accel.sh@16 -- # local accel_opc 00:05:22.358 14:10:03 -- accel/accel.sh@17 -- # local accel_module 00:05:22.358 14:10:03 -- accel/accel.sh@19 -- # IFS=: 00:05:22.358 14:10:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:22.358 14:10:03 -- accel/accel.sh@19 -- # read -r var val 00:05:22.358 14:10:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:22.358 14:10:03 -- accel/accel.sh@12 -- # build_accel_config 00:05:22.358 14:10:03 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.358 14:10:03 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.358 14:10:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.358 14:10:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.358 14:10:03 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.358 14:10:03 -- accel/accel.sh@40 -- # local IFS=, 00:05:22.358 14:10:03 -- accel/accel.sh@41 -- # jq -r . 00:05:22.358 [2024-04-26 14:10:03.821447] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
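Every case in this block ends the same way accel_crc32c_C2 just did: three accel.sh@27 probes on the values captured from the app's output. In sketch form (variable names illustrative, the trace shows them already expanded):

[[ -n "$accel_module" ]]            # some module reported in
[[ -n "$accel_opc" ]]               # the requested opcode was exercised
[[ "$accel_module" == software ]]   # and it ran on the software path, as configured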
00:05:22.358 [2024-04-26 14:10:03.821518] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3068761 ] 00:05:22.358 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.358 [2024-04-26 14:10:03.881032] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.617 [2024-04-26 14:10:03.999158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.617 14:10:04 -- accel/accel.sh@20 -- # val= 00:05:22.617 14:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:22.617 14:10:04 -- accel/accel.sh@20 -- # val= 00:05:22.617 14:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:22.617 14:10:04 -- accel/accel.sh@20 -- # val=0x1 00:05:22.617 14:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:22.617 14:10:04 -- accel/accel.sh@20 -- # val= 00:05:22.617 14:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:22.617 14:10:04 -- accel/accel.sh@20 -- # val= 00:05:22.617 14:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:22.617 14:10:04 -- accel/accel.sh@20 -- # val=copy 00:05:22.617 14:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.617 14:10:04 -- accel/accel.sh@23 -- # accel_opc=copy 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:22.617 14:10:04 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:22.617 14:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:22.617 14:10:04 -- accel/accel.sh@20 -- # val= 00:05:22.617 14:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:22.617 14:10:04 -- accel/accel.sh@20 -- # val=software 00:05:22.617 14:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.617 14:10:04 -- accel/accel.sh@22 -- # accel_module=software 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:22.617 14:10:04 -- accel/accel.sh@20 -- # val=32 00:05:22.617 14:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:22.617 14:10:04 -- accel/accel.sh@20 -- # val=32 00:05:22.617 14:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:22.617 14:10:04 -- accel/accel.sh@20 -- # val=1 00:05:22.617 14:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:22.617 14:10:04 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:05:22.617 14:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:22.617 14:10:04 -- accel/accel.sh@20 -- # val=Yes 00:05:22.617 14:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:22.617 14:10:04 -- accel/accel.sh@20 -- # val= 00:05:22.617 14:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:22.617 14:10:04 -- accel/accel.sh@20 -- # val= 00:05:22.617 14:10:04 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # IFS=: 00:05:22.617 14:10:04 -- accel/accel.sh@19 -- # read -r var val 00:05:23.991 14:10:05 -- accel/accel.sh@20 -- # val= 00:05:23.991 14:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.991 14:10:05 -- accel/accel.sh@19 -- # IFS=: 00:05:23.991 14:10:05 -- accel/accel.sh@19 -- # read -r var val 00:05:23.991 14:10:05 -- accel/accel.sh@20 -- # val= 00:05:23.991 14:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.991 14:10:05 -- accel/accel.sh@19 -- # IFS=: 00:05:23.991 14:10:05 -- accel/accel.sh@19 -- # read -r var val 00:05:23.991 14:10:05 -- accel/accel.sh@20 -- # val= 00:05:23.991 14:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.991 14:10:05 -- accel/accel.sh@19 -- # IFS=: 00:05:23.991 14:10:05 -- accel/accel.sh@19 -- # read -r var val 00:05:23.991 14:10:05 -- accel/accel.sh@20 -- # val= 00:05:23.991 14:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.991 14:10:05 -- accel/accel.sh@19 -- # IFS=: 00:05:23.991 14:10:05 -- accel/accel.sh@19 -- # read -r var val 00:05:23.991 14:10:05 -- accel/accel.sh@20 -- # val= 00:05:23.991 14:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.991 14:10:05 -- accel/accel.sh@19 -- # IFS=: 00:05:23.991 14:10:05 -- accel/accel.sh@19 -- # read -r var val 00:05:23.991 14:10:05 -- accel/accel.sh@20 -- # val= 00:05:23.991 14:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:23.991 14:10:05 -- accel/accel.sh@19 -- # IFS=: 00:05:23.991 14:10:05 -- accel/accel.sh@19 -- # read -r var val 00:05:23.991 14:10:05 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:23.991 14:10:05 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:23.991 14:10:05 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:23.991 00:05:23.991 real 0m1.412s 00:05:23.991 user 0m1.289s 00:05:23.991 sys 0m0.122s 00:05:23.991 14:10:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:23.991 14:10:05 -- common/autotest_common.sh@10 -- # set +x 00:05:23.991 ************************************ 00:05:23.991 END TEST accel_copy 00:05:23.991 ************************************ 00:05:23.991 14:10:05 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:23.991 14:10:05 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:23.991 14:10:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.991 14:10:05 -- common/autotest_common.sh@10 -- # set +x 00:05:23.991 ************************************ 00:05:23.991 START TEST accel_fill 00:05:23.991 ************************************ 00:05:23.991 14:10:05 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:23.991 14:10:05 -- accel/accel.sh@16 -- # local accel_opc 
00:05:23.991 14:10:05 -- accel/accel.sh@17 -- # local accel_module 00:05:23.991 14:10:05 -- accel/accel.sh@19 -- # IFS=: 00:05:23.991 14:10:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:23.991 14:10:05 -- accel/accel.sh@19 -- # read -r var val 00:05:23.991 14:10:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:23.991 14:10:05 -- accel/accel.sh@12 -- # build_accel_config 00:05:23.991 14:10:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.991 14:10:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.991 14:10:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.991 14:10:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.991 14:10:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.991 14:10:05 -- accel/accel.sh@40 -- # local IFS=, 00:05:23.991 14:10:05 -- accel/accel.sh@41 -- # jq -r . 00:05:23.991 [2024-04-26 14:10:05.369320] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:05:23.991 [2024-04-26 14:10:05.369396] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3068976 ] 00:05:23.991 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.991 [2024-04-26 14:10:05.428138] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.991 [2024-04-26 14:10:05.542980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.250 14:10:05 -- accel/accel.sh@20 -- # val= 00:05:24.250 14:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # IFS=: 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # read -r var val 00:05:24.250 14:10:05 -- accel/accel.sh@20 -- # val= 00:05:24.250 14:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # IFS=: 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # read -r var val 00:05:24.250 14:10:05 -- accel/accel.sh@20 -- # val=0x1 00:05:24.250 14:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # IFS=: 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # read -r var val 00:05:24.250 14:10:05 -- accel/accel.sh@20 -- # val= 00:05:24.250 14:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # IFS=: 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # read -r var val 00:05:24.250 14:10:05 -- accel/accel.sh@20 -- # val= 00:05:24.250 14:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # IFS=: 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # read -r var val 00:05:24.250 14:10:05 -- accel/accel.sh@20 -- # val=fill 00:05:24.250 14:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.250 14:10:05 -- accel/accel.sh@23 -- # accel_opc=fill 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # IFS=: 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # read -r var val 00:05:24.250 14:10:05 -- accel/accel.sh@20 -- # val=0x80 00:05:24.250 14:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # IFS=: 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # read -r var val 00:05:24.250 14:10:05 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:24.250 14:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # IFS=: 00:05:24.250 14:10:05 -- accel/accel.sh@19 
-- # read -r var val 00:05:24.250 14:10:05 -- accel/accel.sh@20 -- # val= 00:05:24.250 14:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # IFS=: 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # read -r var val 00:05:24.250 14:10:05 -- accel/accel.sh@20 -- # val=software 00:05:24.250 14:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.250 14:10:05 -- accel/accel.sh@22 -- # accel_module=software 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # IFS=: 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # read -r var val 00:05:24.250 14:10:05 -- accel/accel.sh@20 -- # val=64 00:05:24.250 14:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # IFS=: 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # read -r var val 00:05:24.250 14:10:05 -- accel/accel.sh@20 -- # val=64 00:05:24.250 14:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # IFS=: 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # read -r var val 00:05:24.250 14:10:05 -- accel/accel.sh@20 -- # val=1 00:05:24.250 14:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # IFS=: 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # read -r var val 00:05:24.250 14:10:05 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:24.250 14:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # IFS=: 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # read -r var val 00:05:24.250 14:10:05 -- accel/accel.sh@20 -- # val=Yes 00:05:24.250 14:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # IFS=: 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # read -r var val 00:05:24.250 14:10:05 -- accel/accel.sh@20 -- # val= 00:05:24.250 14:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.250 14:10:05 -- accel/accel.sh@19 -- # IFS=: 00:05:24.251 14:10:05 -- accel/accel.sh@19 -- # read -r var val 00:05:24.251 14:10:05 -- accel/accel.sh@20 -- # val= 00:05:24.251 14:10:05 -- accel/accel.sh@21 -- # case "$var" in 00:05:24.251 14:10:05 -- accel/accel.sh@19 -- # IFS=: 00:05:24.251 14:10:05 -- accel/accel.sh@19 -- # read -r var val 00:05:25.626 14:10:06 -- accel/accel.sh@20 -- # val= 00:05:25.626 14:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.626 14:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:25.626 14:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:25.626 14:10:06 -- accel/accel.sh@20 -- # val= 00:05:25.626 14:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.626 14:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:25.626 14:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:25.626 14:10:06 -- accel/accel.sh@20 -- # val= 00:05:25.626 14:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.626 14:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:25.626 14:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:25.626 14:10:06 -- accel/accel.sh@20 -- # val= 00:05:25.626 14:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.626 14:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:25.626 14:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:25.626 14:10:06 -- accel/accel.sh@20 -- # val= 00:05:25.626 14:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.626 14:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:25.626 14:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:25.626 14:10:06 -- accel/accel.sh@20 -- # val= 00:05:25.626 14:10:06 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.626 14:10:06 -- accel/accel.sh@19 
-- # IFS=: 00:05:25.626 14:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:25.626 14:10:06 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:25.626 14:10:06 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:25.626 14:10:06 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:25.626 00:05:25.626 real 0m1.406s 00:05:25.626 user 0m1.280s 00:05:25.626 sys 0m0.126s 00:05:25.626 14:10:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:25.626 14:10:06 -- common/autotest_common.sh@10 -- # set +x 00:05:25.626 ************************************ 00:05:25.626 END TEST accel_fill 00:05:25.626 ************************************ 00:05:25.626 14:10:06 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:25.626 14:10:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:25.626 14:10:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.626 14:10:06 -- common/autotest_common.sh@10 -- # set +x 00:05:25.626 ************************************ 00:05:25.626 START TEST accel_copy_crc32c 00:05:25.626 ************************************ 00:05:25.626 14:10:06 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:05:25.626 14:10:06 -- accel/accel.sh@16 -- # local accel_opc 00:05:25.626 14:10:06 -- accel/accel.sh@17 -- # local accel_module 00:05:25.626 14:10:06 -- accel/accel.sh@19 -- # IFS=: 00:05:25.626 14:10:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:25.626 14:10:06 -- accel/accel.sh@19 -- # read -r var val 00:05:25.626 14:10:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:25.626 14:10:06 -- accel/accel.sh@12 -- # build_accel_config 00:05:25.626 14:10:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:25.626 14:10:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:25.626 14:10:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.626 14:10:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.626 14:10:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:25.626 14:10:06 -- accel/accel.sh@40 -- # local IFS=, 00:05:25.626 14:10:06 -- accel/accel.sh@41 -- # jq -r . 00:05:25.626 [2024-04-26 14:10:06.917488] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
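accel_copy_crc32c, launching here, chains a buffer copy with a CRC-32C of the same data in one operation, which is why its trace below replays two '4096 bytes' buffers (source and destination) where plain crc32c and copy replayed one. Illustrative direct call:

accel_perf -t 1 -w copy_crc32c -y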
00:05:25.626 [2024-04-26 14:10:06.917564] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3069108 ] 00:05:25.626 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.626 [2024-04-26 14:10:06.976928] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.626 [2024-04-26 14:10:07.094095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.626 14:10:07 -- accel/accel.sh@20 -- # val= 00:05:25.626 14:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:25.626 14:10:07 -- accel/accel.sh@20 -- # val= 00:05:25.626 14:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:25.626 14:10:07 -- accel/accel.sh@20 -- # val=0x1 00:05:25.626 14:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:25.626 14:10:07 -- accel/accel.sh@20 -- # val= 00:05:25.626 14:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:25.626 14:10:07 -- accel/accel.sh@20 -- # val= 00:05:25.626 14:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:25.626 14:10:07 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:25.626 14:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.626 14:10:07 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:25.626 14:10:07 -- accel/accel.sh@20 -- # val=0 00:05:25.626 14:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:25.626 14:10:07 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:25.626 14:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:25.626 14:10:07 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:25.626 14:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:25.626 14:10:07 -- accel/accel.sh@20 -- # val= 00:05:25.626 14:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:25.626 14:10:07 -- accel/accel.sh@20 -- # val=software 00:05:25.626 14:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.626 14:10:07 -- accel/accel.sh@22 -- # accel_module=software 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:25.626 14:10:07 -- accel/accel.sh@20 -- # val=32 00:05:25.626 14:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # read -r var val 
00:05:25.626 14:10:07 -- accel/accel.sh@20 -- # val=32 00:05:25.626 14:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:25.626 14:10:07 -- accel/accel.sh@20 -- # val=1 00:05:25.626 14:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:25.626 14:10:07 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:25.626 14:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:25.626 14:10:07 -- accel/accel.sh@20 -- # val=Yes 00:05:25.626 14:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:25.626 14:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:25.626 14:10:07 -- accel/accel.sh@20 -- # val= 00:05:25.626 14:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.627 14:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:25.627 14:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:25.627 14:10:07 -- accel/accel.sh@20 -- # val= 00:05:25.627 14:10:07 -- accel/accel.sh@21 -- # case "$var" in 00:05:25.627 14:10:07 -- accel/accel.sh@19 -- # IFS=: 00:05:25.627 14:10:07 -- accel/accel.sh@19 -- # read -r var val 00:05:27.001 14:10:08 -- accel/accel.sh@20 -- # val= 00:05:27.001 14:10:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.001 14:10:08 -- accel/accel.sh@19 -- # IFS=: 00:05:27.001 14:10:08 -- accel/accel.sh@19 -- # read -r var val 00:05:27.001 14:10:08 -- accel/accel.sh@20 -- # val= 00:05:27.001 14:10:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.001 14:10:08 -- accel/accel.sh@19 -- # IFS=: 00:05:27.001 14:10:08 -- accel/accel.sh@19 -- # read -r var val 00:05:27.001 14:10:08 -- accel/accel.sh@20 -- # val= 00:05:27.001 14:10:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.001 14:10:08 -- accel/accel.sh@19 -- # IFS=: 00:05:27.001 14:10:08 -- accel/accel.sh@19 -- # read -r var val 00:05:27.001 14:10:08 -- accel/accel.sh@20 -- # val= 00:05:27.001 14:10:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.001 14:10:08 -- accel/accel.sh@19 -- # IFS=: 00:05:27.001 14:10:08 -- accel/accel.sh@19 -- # read -r var val 00:05:27.001 14:10:08 -- accel/accel.sh@20 -- # val= 00:05:27.001 14:10:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.001 14:10:08 -- accel/accel.sh@19 -- # IFS=: 00:05:27.001 14:10:08 -- accel/accel.sh@19 -- # read -r var val 00:05:27.001 14:10:08 -- accel/accel.sh@20 -- # val= 00:05:27.001 14:10:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.001 14:10:08 -- accel/accel.sh@19 -- # IFS=: 00:05:27.001 14:10:08 -- accel/accel.sh@19 -- # read -r var val 00:05:27.001 14:10:08 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:27.001 14:10:08 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:27.001 14:10:08 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:27.001 00:05:27.001 real 0m1.412s 00:05:27.001 user 0m1.295s 00:05:27.001 sys 0m0.118s 00:05:27.001 14:10:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:27.001 14:10:08 -- common/autotest_common.sh@10 -- # set +x 00:05:27.001 ************************************ 00:05:27.001 END TEST accel_copy_crc32c 00:05:27.001 ************************************ 00:05:27.001 14:10:08 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:27.001 
14:10:08 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:27.001 14:10:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.001 14:10:08 -- common/autotest_common.sh@10 -- # set +x 00:05:27.001 ************************************ 00:05:27.001 START TEST accel_copy_crc32c_C2 00:05:27.001 ************************************ 00:05:27.001 14:10:08 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:27.001 14:10:08 -- accel/accel.sh@16 -- # local accel_opc 00:05:27.001 14:10:08 -- accel/accel.sh@17 -- # local accel_module 00:05:27.001 14:10:08 -- accel/accel.sh@19 -- # IFS=: 00:05:27.001 14:10:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:27.001 14:10:08 -- accel/accel.sh@19 -- # read -r var val 00:05:27.001 14:10:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:27.001 14:10:08 -- accel/accel.sh@12 -- # build_accel_config 00:05:27.002 14:10:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.002 14:10:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.002 14:10:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.002 14:10:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.002 14:10:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.002 14:10:08 -- accel/accel.sh@40 -- # local IFS=, 00:05:27.002 14:10:08 -- accel/accel.sh@41 -- # jq -r . 00:05:27.002 [2024-04-26 14:10:08.453458] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:05:27.002 [2024-04-26 14:10:08.453542] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3069271 ] 00:05:27.002 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.002 [2024-04-26 14:10:08.513134] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.260 [2024-04-26 14:10:08.631388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.260 14:10:08 -- accel/accel.sh@20 -- # val= 00:05:27.260 14:10:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # IFS=: 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # read -r var val 00:05:27.260 14:10:08 -- accel/accel.sh@20 -- # val= 00:05:27.260 14:10:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # IFS=: 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # read -r var val 00:05:27.260 14:10:08 -- accel/accel.sh@20 -- # val=0x1 00:05:27.260 14:10:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # IFS=: 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # read -r var val 00:05:27.260 14:10:08 -- accel/accel.sh@20 -- # val= 00:05:27.260 14:10:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # IFS=: 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # read -r var val 00:05:27.260 14:10:08 -- accel/accel.sh@20 -- # val= 00:05:27.260 14:10:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # IFS=: 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # read -r var val 00:05:27.260 14:10:08 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:27.260 14:10:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.260 14:10:08 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # IFS=: 00:05:27.260 
14:10:08 -- accel/accel.sh@19 -- # read -r var val 00:05:27.260 14:10:08 -- accel/accel.sh@20 -- # val=0 00:05:27.260 14:10:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # IFS=: 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # read -r var val 00:05:27.260 14:10:08 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:27.260 14:10:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # IFS=: 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # read -r var val 00:05:27.260 14:10:08 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:27.260 14:10:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # IFS=: 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # read -r var val 00:05:27.260 14:10:08 -- accel/accel.sh@20 -- # val= 00:05:27.260 14:10:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # IFS=: 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # read -r var val 00:05:27.260 14:10:08 -- accel/accel.sh@20 -- # val=software 00:05:27.260 14:10:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.260 14:10:08 -- accel/accel.sh@22 -- # accel_module=software 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # IFS=: 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # read -r var val 00:05:27.260 14:10:08 -- accel/accel.sh@20 -- # val=32 00:05:27.260 14:10:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # IFS=: 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # read -r var val 00:05:27.260 14:10:08 -- accel/accel.sh@20 -- # val=32 00:05:27.260 14:10:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # IFS=: 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # read -r var val 00:05:27.260 14:10:08 -- accel/accel.sh@20 -- # val=1 00:05:27.260 14:10:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # IFS=: 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # read -r var val 00:05:27.260 14:10:08 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:27.260 14:10:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.260 14:10:08 -- accel/accel.sh@19 -- # IFS=: 00:05:27.261 14:10:08 -- accel/accel.sh@19 -- # read -r var val 00:05:27.261 14:10:08 -- accel/accel.sh@20 -- # val=Yes 00:05:27.261 14:10:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.261 14:10:08 -- accel/accel.sh@19 -- # IFS=: 00:05:27.261 14:10:08 -- accel/accel.sh@19 -- # read -r var val 00:05:27.261 14:10:08 -- accel/accel.sh@20 -- # val= 00:05:27.261 14:10:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.261 14:10:08 -- accel/accel.sh@19 -- # IFS=: 00:05:27.261 14:10:08 -- accel/accel.sh@19 -- # read -r var val 00:05:27.261 14:10:08 -- accel/accel.sh@20 -- # val= 00:05:27.261 14:10:08 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.261 14:10:08 -- accel/accel.sh@19 -- # IFS=: 00:05:27.261 14:10:08 -- accel/accel.sh@19 -- # read -r var val 00:05:28.635 14:10:09 -- accel/accel.sh@20 -- # val= 00:05:28.635 14:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.635 14:10:09 -- accel/accel.sh@19 -- # IFS=: 00:05:28.635 14:10:09 -- accel/accel.sh@19 -- # read -r var val 00:05:28.635 14:10:09 -- accel/accel.sh@20 -- # val= 00:05:28.635 14:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.635 14:10:09 -- accel/accel.sh@19 -- # IFS=: 00:05:28.635 14:10:09 -- accel/accel.sh@19 -- # read -r var val 00:05:28.635 14:10:09 -- accel/accel.sh@20 -- # val= 00:05:28.635 14:10:09 -- accel/accel.sh@21 -- # case 
"$var" in 00:05:28.635 14:10:09 -- accel/accel.sh@19 -- # IFS=: 00:05:28.635 14:10:09 -- accel/accel.sh@19 -- # read -r var val 00:05:28.635 14:10:09 -- accel/accel.sh@20 -- # val= 00:05:28.635 14:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.635 14:10:09 -- accel/accel.sh@19 -- # IFS=: 00:05:28.635 14:10:09 -- accel/accel.sh@19 -- # read -r var val 00:05:28.635 14:10:09 -- accel/accel.sh@20 -- # val= 00:05:28.635 14:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.635 14:10:09 -- accel/accel.sh@19 -- # IFS=: 00:05:28.635 14:10:09 -- accel/accel.sh@19 -- # read -r var val 00:05:28.635 14:10:09 -- accel/accel.sh@20 -- # val= 00:05:28.635 14:10:09 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.635 14:10:09 -- accel/accel.sh@19 -- # IFS=: 00:05:28.635 14:10:09 -- accel/accel.sh@19 -- # read -r var val 00:05:28.635 14:10:09 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:28.635 14:10:09 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:28.635 14:10:09 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.635 00:05:28.635 real 0m1.412s 00:05:28.635 user 0m1.288s 00:05:28.635 sys 0m0.124s 00:05:28.635 14:10:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:28.635 14:10:09 -- common/autotest_common.sh@10 -- # set +x 00:05:28.635 ************************************ 00:05:28.635 END TEST accel_copy_crc32c_C2 00:05:28.635 ************************************ 00:05:28.635 14:10:09 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:28.635 14:10:09 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:28.635 14:10:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.635 14:10:09 -- common/autotest_common.sh@10 -- # set +x 00:05:28.635 ************************************ 00:05:28.635 START TEST accel_dualcast 00:05:28.635 ************************************ 00:05:28.635 14:10:09 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:05:28.635 14:10:09 -- accel/accel.sh@16 -- # local accel_opc 00:05:28.635 14:10:09 -- accel/accel.sh@17 -- # local accel_module 00:05:28.635 14:10:09 -- accel/accel.sh@19 -- # IFS=: 00:05:28.635 14:10:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:28.635 14:10:09 -- accel/accel.sh@19 -- # read -r var val 00:05:28.635 14:10:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:28.635 14:10:09 -- accel/accel.sh@12 -- # build_accel_config 00:05:28.635 14:10:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:28.635 14:10:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:28.635 14:10:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.635 14:10:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.635 14:10:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:28.635 14:10:09 -- accel/accel.sh@40 -- # local IFS=, 00:05:28.635 14:10:09 -- accel/accel.sh@41 -- # jq -r . 00:05:28.635 [2024-04-26 14:10:09.991931] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
00:05:28.635 [2024-04-26 14:10:09.992008] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3069460 ] 00:05:28.635 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.635 [2024-04-26 14:10:10.053903] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.635 [2024-04-26 14:10:10.171437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.903 14:10:10 -- accel/accel.sh@20 -- # val= 00:05:28.903 14:10:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # IFS=: 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # read -r var val 00:05:28.904 14:10:10 -- accel/accel.sh@20 -- # val= 00:05:28.904 14:10:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # IFS=: 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # read -r var val 00:05:28.904 14:10:10 -- accel/accel.sh@20 -- # val=0x1 00:05:28.904 14:10:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # IFS=: 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # read -r var val 00:05:28.904 14:10:10 -- accel/accel.sh@20 -- # val= 00:05:28.904 14:10:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # IFS=: 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # read -r var val 00:05:28.904 14:10:10 -- accel/accel.sh@20 -- # val= 00:05:28.904 14:10:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # IFS=: 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # read -r var val 00:05:28.904 14:10:10 -- accel/accel.sh@20 -- # val=dualcast 00:05:28.904 14:10:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.904 14:10:10 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # IFS=: 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # read -r var val 00:05:28.904 14:10:10 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:28.904 14:10:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # IFS=: 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # read -r var val 00:05:28.904 14:10:10 -- accel/accel.sh@20 -- # val= 00:05:28.904 14:10:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # IFS=: 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # read -r var val 00:05:28.904 14:10:10 -- accel/accel.sh@20 -- # val=software 00:05:28.904 14:10:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.904 14:10:10 -- accel/accel.sh@22 -- # accel_module=software 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # IFS=: 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # read -r var val 00:05:28.904 14:10:10 -- accel/accel.sh@20 -- # val=32 00:05:28.904 14:10:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # IFS=: 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # read -r var val 00:05:28.904 14:10:10 -- accel/accel.sh@20 -- # val=32 00:05:28.904 14:10:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # IFS=: 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # read -r var val 00:05:28.904 14:10:10 -- accel/accel.sh@20 -- # val=1 00:05:28.904 14:10:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # IFS=: 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # read -r var val 00:05:28.904 14:10:10 
-- accel/accel.sh@20 -- # val='1 seconds' 00:05:28.904 14:10:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # IFS=: 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # read -r var val 00:05:28.904 14:10:10 -- accel/accel.sh@20 -- # val=Yes 00:05:28.904 14:10:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # IFS=: 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # read -r var val 00:05:28.904 14:10:10 -- accel/accel.sh@20 -- # val= 00:05:28.904 14:10:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # IFS=: 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # read -r var val 00:05:28.904 14:10:10 -- accel/accel.sh@20 -- # val= 00:05:28.904 14:10:10 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # IFS=: 00:05:28.904 14:10:10 -- accel/accel.sh@19 -- # read -r var val 00:05:29.890 14:10:11 -- accel/accel.sh@20 -- # val= 00:05:29.890 14:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.890 14:10:11 -- accel/accel.sh@19 -- # IFS=: 00:05:29.890 14:10:11 -- accel/accel.sh@19 -- # read -r var val 00:05:29.890 14:10:11 -- accel/accel.sh@20 -- # val= 00:05:29.890 14:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.890 14:10:11 -- accel/accel.sh@19 -- # IFS=: 00:05:29.890 14:10:11 -- accel/accel.sh@19 -- # read -r var val 00:05:29.890 14:10:11 -- accel/accel.sh@20 -- # val= 00:05:29.890 14:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.890 14:10:11 -- accel/accel.sh@19 -- # IFS=: 00:05:29.890 14:10:11 -- accel/accel.sh@19 -- # read -r var val 00:05:29.890 14:10:11 -- accel/accel.sh@20 -- # val= 00:05:29.890 14:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.890 14:10:11 -- accel/accel.sh@19 -- # IFS=: 00:05:29.890 14:10:11 -- accel/accel.sh@19 -- # read -r var val 00:05:29.890 14:10:11 -- accel/accel.sh@20 -- # val= 00:05:29.890 14:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.890 14:10:11 -- accel/accel.sh@19 -- # IFS=: 00:05:29.890 14:10:11 -- accel/accel.sh@19 -- # read -r var val 00:05:29.890 14:10:11 -- accel/accel.sh@20 -- # val= 00:05:29.890 14:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:29.890 14:10:11 -- accel/accel.sh@19 -- # IFS=: 00:05:29.890 14:10:11 -- accel/accel.sh@19 -- # read -r var val 00:05:29.890 14:10:11 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:29.890 14:10:11 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:29.890 14:10:11 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:29.890 00:05:29.890 real 0m1.417s 00:05:29.890 user 0m1.291s 00:05:29.890 sys 0m0.125s 00:05:29.890 14:10:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:29.890 14:10:11 -- common/autotest_common.sh@10 -- # set +x 00:05:29.890 ************************************ 00:05:29.890 END TEST accel_dualcast 00:05:29.890 ************************************ 00:05:29.890 14:10:11 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:29.890 14:10:11 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:29.890 14:10:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.890 14:10:11 -- common/autotest_common.sh@10 -- # set +x 00:05:30.159 ************************************ 00:05:30.159 START TEST accel_compare 00:05:30.159 ************************************ 00:05:30.159 14:10:11 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:05:30.159 14:10:11 -- accel/accel.sh@16 -- # local accel_opc 00:05:30.159 14:10:11 
-- accel/accel.sh@17 -- # local accel_module 00:05:30.159 14:10:11 -- accel/accel.sh@19 -- # IFS=: 00:05:30.159 14:10:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:30.159 14:10:11 -- accel/accel.sh@19 -- # read -r var val 00:05:30.159 14:10:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:30.159 14:10:11 -- accel/accel.sh@12 -- # build_accel_config 00:05:30.159 14:10:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.159 14:10:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.159 14:10:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.159 14:10:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.159 14:10:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.159 14:10:11 -- accel/accel.sh@40 -- # local IFS=, 00:05:30.159 14:10:11 -- accel/accel.sh@41 -- # jq -r . 00:05:30.159 [2024-04-26 14:10:11.542349] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:05:30.159 [2024-04-26 14:10:11.542416] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3069593 ] 00:05:30.159 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.159 [2024-04-26 14:10:11.602179] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.159 [2024-04-26 14:10:11.719993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.468 14:10:11 -- accel/accel.sh@20 -- # val= 00:05:30.468 14:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # IFS=: 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # read -r var val 00:05:30.468 14:10:11 -- accel/accel.sh@20 -- # val= 00:05:30.468 14:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # IFS=: 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # read -r var val 00:05:30.468 14:10:11 -- accel/accel.sh@20 -- # val=0x1 00:05:30.468 14:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # IFS=: 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # read -r var val 00:05:30.468 14:10:11 -- accel/accel.sh@20 -- # val= 00:05:30.468 14:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # IFS=: 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # read -r var val 00:05:30.468 14:10:11 -- accel/accel.sh@20 -- # val= 00:05:30.468 14:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # IFS=: 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # read -r var val 00:05:30.468 14:10:11 -- accel/accel.sh@20 -- # val=compare 00:05:30.468 14:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.468 14:10:11 -- accel/accel.sh@23 -- # accel_opc=compare 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # IFS=: 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # read -r var val 00:05:30.468 14:10:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:30.468 14:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # IFS=: 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # read -r var val 00:05:30.468 14:10:11 -- accel/accel.sh@20 -- # val= 00:05:30.468 14:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # IFS=: 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # read -r var val 00:05:30.468 14:10:11 -- 
accel/accel.sh@20 -- # val=software 00:05:30.468 14:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.468 14:10:11 -- accel/accel.sh@22 -- # accel_module=software 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # IFS=: 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # read -r var val 00:05:30.468 14:10:11 -- accel/accel.sh@20 -- # val=32 00:05:30.468 14:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # IFS=: 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # read -r var val 00:05:30.468 14:10:11 -- accel/accel.sh@20 -- # val=32 00:05:30.468 14:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # IFS=: 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # read -r var val 00:05:30.468 14:10:11 -- accel/accel.sh@20 -- # val=1 00:05:30.468 14:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # IFS=: 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # read -r var val 00:05:30.468 14:10:11 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:30.468 14:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # IFS=: 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # read -r var val 00:05:30.468 14:10:11 -- accel/accel.sh@20 -- # val=Yes 00:05:30.468 14:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # IFS=: 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # read -r var val 00:05:30.468 14:10:11 -- accel/accel.sh@20 -- # val= 00:05:30.468 14:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # IFS=: 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # read -r var val 00:05:30.468 14:10:11 -- accel/accel.sh@20 -- # val= 00:05:30.468 14:10:11 -- accel/accel.sh@21 -- # case "$var" in 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # IFS=: 00:05:30.468 14:10:11 -- accel/accel.sh@19 -- # read -r var val 00:05:31.429 14:10:12 -- accel/accel.sh@20 -- # val= 00:05:31.429 14:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.429 14:10:12 -- accel/accel.sh@19 -- # IFS=: 00:05:31.429 14:10:12 -- accel/accel.sh@19 -- # read -r var val 00:05:31.429 14:10:12 -- accel/accel.sh@20 -- # val= 00:05:31.429 14:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.429 14:10:12 -- accel/accel.sh@19 -- # IFS=: 00:05:31.429 14:10:12 -- accel/accel.sh@19 -- # read -r var val 00:05:31.429 14:10:12 -- accel/accel.sh@20 -- # val= 00:05:31.429 14:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.429 14:10:12 -- accel/accel.sh@19 -- # IFS=: 00:05:31.429 14:10:12 -- accel/accel.sh@19 -- # read -r var val 00:05:31.429 14:10:12 -- accel/accel.sh@20 -- # val= 00:05:31.429 14:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.429 14:10:12 -- accel/accel.sh@19 -- # IFS=: 00:05:31.429 14:10:12 -- accel/accel.sh@19 -- # read -r var val 00:05:31.429 14:10:12 -- accel/accel.sh@20 -- # val= 00:05:31.429 14:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.429 14:10:12 -- accel/accel.sh@19 -- # IFS=: 00:05:31.429 14:10:12 -- accel/accel.sh@19 -- # read -r var val 00:05:31.429 14:10:12 -- accel/accel.sh@20 -- # val= 00:05:31.429 14:10:12 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.429 14:10:12 -- accel/accel.sh@19 -- # IFS=: 00:05:31.429 14:10:12 -- accel/accel.sh@19 -- # read -r var val 00:05:31.429 14:10:12 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:31.429 14:10:12 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:31.429 14:10:12 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:05:31.429 00:05:31.429 real 0m1.410s 00:05:31.429 user 0m1.284s 00:05:31.429 sys 0m0.126s 00:05:31.429 14:10:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:31.429 14:10:12 -- common/autotest_common.sh@10 -- # set +x 00:05:31.429 ************************************ 00:05:31.429 END TEST accel_compare 00:05:31.429 ************************************ 00:05:31.429 14:10:12 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:31.429 14:10:12 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:31.429 14:10:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.429 14:10:12 -- common/autotest_common.sh@10 -- # set +x 00:05:31.688 ************************************ 00:05:31.688 START TEST accel_xor 00:05:31.688 ************************************ 00:05:31.688 14:10:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:05:31.688 14:10:13 -- accel/accel.sh@16 -- # local accel_opc 00:05:31.688 14:10:13 -- accel/accel.sh@17 -- # local accel_module 00:05:31.688 14:10:13 -- accel/accel.sh@19 -- # IFS=: 00:05:31.688 14:10:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:31.688 14:10:13 -- accel/accel.sh@19 -- # read -r var val 00:05:31.688 14:10:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:31.688 14:10:13 -- accel/accel.sh@12 -- # build_accel_config 00:05:31.688 14:10:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.688 14:10:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.688 14:10:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.688 14:10:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.688 14:10:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.688 14:10:13 -- accel/accel.sh@40 -- # local IFS=, 00:05:31.688 14:10:13 -- accel/accel.sh@41 -- # jq -r . 00:05:31.688 [2024-04-26 14:10:13.088522] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
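The long runs of IFS=: / read -r var val / case "$var" lines above, together with the closing [[ -n software ]] and [[ -n xor ]] checks, are the xtrace of a parse loop over accel_perf's "key: value" startup summary. A rough sketch of that loop; the exact key strings accel_perf prints are not visible in this excerpt, so the case patterns are assumptions:

  # Split accel_perf's "key: value" output on ':' and record module/opcode.
  accel_perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  accel_module="" accel_opc=""
  while IFS=: read -r var val; do
      case "$var" in
          *[Mm]odule*) accel_module=${val# } ;;          # assumed key text
          *"Workload Type"*) accel_opc=${val# } ;;       # assumed key text
      esac
  done < <("$accel_perf" -c /dev/fd/62 -t 1 -w xor -y 62< <(printf '%s\n' '{}'))
  [[ -n $accel_module && -n $accel_opc ]]   # the checks seen in the trace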
00:05:31.688 [2024-04-26 14:10:13.088595] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3069809 ] 00:05:31.688 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.688 [2024-04-26 14:10:13.148100] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.946 [2024-04-26 14:10:13.265571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.946 14:10:13 -- accel/accel.sh@20 -- # val= 00:05:31.946 14:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # IFS=: 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # read -r var val 00:05:31.946 14:10:13 -- accel/accel.sh@20 -- # val= 00:05:31.946 14:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # IFS=: 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # read -r var val 00:05:31.946 14:10:13 -- accel/accel.sh@20 -- # val=0x1 00:05:31.946 14:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # IFS=: 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # read -r var val 00:05:31.946 14:10:13 -- accel/accel.sh@20 -- # val= 00:05:31.946 14:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # IFS=: 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # read -r var val 00:05:31.946 14:10:13 -- accel/accel.sh@20 -- # val= 00:05:31.946 14:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # IFS=: 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # read -r var val 00:05:31.946 14:10:13 -- accel/accel.sh@20 -- # val=xor 00:05:31.946 14:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.946 14:10:13 -- accel/accel.sh@23 -- # accel_opc=xor 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # IFS=: 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # read -r var val 00:05:31.946 14:10:13 -- accel/accel.sh@20 -- # val=2 00:05:31.946 14:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # IFS=: 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # read -r var val 00:05:31.946 14:10:13 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.946 14:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # IFS=: 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # read -r var val 00:05:31.946 14:10:13 -- accel/accel.sh@20 -- # val= 00:05:31.946 14:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # IFS=: 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # read -r var val 00:05:31.946 14:10:13 -- accel/accel.sh@20 -- # val=software 00:05:31.946 14:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.946 14:10:13 -- accel/accel.sh@22 -- # accel_module=software 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # IFS=: 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # read -r var val 00:05:31.946 14:10:13 -- accel/accel.sh@20 -- # val=32 00:05:31.946 14:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # IFS=: 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # read -r var val 00:05:31.946 14:10:13 -- accel/accel.sh@20 -- # val=32 00:05:31.946 14:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # IFS=: 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # read -r var val 00:05:31.946 14:10:13 -- 
accel/accel.sh@20 -- # val=1 00:05:31.946 14:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # IFS=: 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # read -r var val 00:05:31.946 14:10:13 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:31.946 14:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # IFS=: 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # read -r var val 00:05:31.946 14:10:13 -- accel/accel.sh@20 -- # val=Yes 00:05:31.946 14:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # IFS=: 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # read -r var val 00:05:31.946 14:10:13 -- accel/accel.sh@20 -- # val= 00:05:31.946 14:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # IFS=: 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # read -r var val 00:05:31.946 14:10:13 -- accel/accel.sh@20 -- # val= 00:05:31.946 14:10:13 -- accel/accel.sh@21 -- # case "$var" in 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # IFS=: 00:05:31.946 14:10:13 -- accel/accel.sh@19 -- # read -r var val 00:05:33.319 14:10:14 -- accel/accel.sh@20 -- # val= 00:05:33.319 14:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.319 14:10:14 -- accel/accel.sh@19 -- # IFS=: 00:05:33.319 14:10:14 -- accel/accel.sh@19 -- # read -r var val 00:05:33.319 14:10:14 -- accel/accel.sh@20 -- # val= 00:05:33.319 14:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.319 14:10:14 -- accel/accel.sh@19 -- # IFS=: 00:05:33.319 14:10:14 -- accel/accel.sh@19 -- # read -r var val 00:05:33.319 14:10:14 -- accel/accel.sh@20 -- # val= 00:05:33.319 14:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.319 14:10:14 -- accel/accel.sh@19 -- # IFS=: 00:05:33.319 14:10:14 -- accel/accel.sh@19 -- # read -r var val 00:05:33.319 14:10:14 -- accel/accel.sh@20 -- # val= 00:05:33.320 14:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # IFS=: 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # read -r var val 00:05:33.320 14:10:14 -- accel/accel.sh@20 -- # val= 00:05:33.320 14:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # IFS=: 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # read -r var val 00:05:33.320 14:10:14 -- accel/accel.sh@20 -- # val= 00:05:33.320 14:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # IFS=: 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # read -r var val 00:05:33.320 14:10:14 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:33.320 14:10:14 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:33.320 14:10:14 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.320 00:05:33.320 real 0m1.412s 00:05:33.320 user 0m1.284s 00:05:33.320 sys 0m0.131s 00:05:33.320 14:10:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:33.320 14:10:14 -- common/autotest_common.sh@10 -- # set +x 00:05:33.320 ************************************ 00:05:33.320 END TEST accel_xor 00:05:33.320 ************************************ 00:05:33.320 14:10:14 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:33.320 14:10:14 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:33.320 14:10:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.320 14:10:14 -- common/autotest_common.sh@10 -- # set +x 00:05:33.320 ************************************ 00:05:33.320 START TEST accel_xor 
00:05:33.320 ************************************ 00:05:33.320 14:10:14 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:05:33.320 14:10:14 -- accel/accel.sh@16 -- # local accel_opc 00:05:33.320 14:10:14 -- accel/accel.sh@17 -- # local accel_module 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # IFS=: 00:05:33.320 14:10:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # read -r var val 00:05:33.320 14:10:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:33.320 14:10:14 -- accel/accel.sh@12 -- # build_accel_config 00:05:33.320 14:10:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.320 14:10:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.320 14:10:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.320 14:10:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.320 14:10:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.320 14:10:14 -- accel/accel.sh@40 -- # local IFS=, 00:05:33.320 14:10:14 -- accel/accel.sh@41 -- # jq -r . 00:05:33.320 [2024-04-26 14:10:14.630268] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:05:33.320 [2024-04-26 14:10:14.630338] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3069948 ] 00:05:33.320 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.320 [2024-04-26 14:10:14.688690] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.320 [2024-04-26 14:10:14.806186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.320 14:10:14 -- accel/accel.sh@20 -- # val= 00:05:33.320 14:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # IFS=: 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # read -r var val 00:05:33.320 14:10:14 -- accel/accel.sh@20 -- # val= 00:05:33.320 14:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # IFS=: 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # read -r var val 00:05:33.320 14:10:14 -- accel/accel.sh@20 -- # val=0x1 00:05:33.320 14:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # IFS=: 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # read -r var val 00:05:33.320 14:10:14 -- accel/accel.sh@20 -- # val= 00:05:33.320 14:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # IFS=: 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # read -r var val 00:05:33.320 14:10:14 -- accel/accel.sh@20 -- # val= 00:05:33.320 14:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # IFS=: 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # read -r var val 00:05:33.320 14:10:14 -- accel/accel.sh@20 -- # val=xor 00:05:33.320 14:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.320 14:10:14 -- accel/accel.sh@23 -- # accel_opc=xor 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # IFS=: 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # read -r var val 00:05:33.320 14:10:14 -- accel/accel.sh@20 -- # val=3 00:05:33.320 14:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # IFS=: 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # read -r var val 00:05:33.320 14:10:14 -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:05:33.320 14:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # IFS=: 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # read -r var val 00:05:33.320 14:10:14 -- accel/accel.sh@20 -- # val= 00:05:33.320 14:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # IFS=: 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # read -r var val 00:05:33.320 14:10:14 -- accel/accel.sh@20 -- # val=software 00:05:33.320 14:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.320 14:10:14 -- accel/accel.sh@22 -- # accel_module=software 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # IFS=: 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # read -r var val 00:05:33.320 14:10:14 -- accel/accel.sh@20 -- # val=32 00:05:33.320 14:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # IFS=: 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # read -r var val 00:05:33.320 14:10:14 -- accel/accel.sh@20 -- # val=32 00:05:33.320 14:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # IFS=: 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # read -r var val 00:05:33.320 14:10:14 -- accel/accel.sh@20 -- # val=1 00:05:33.320 14:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # IFS=: 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # read -r var val 00:05:33.320 14:10:14 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.320 14:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # IFS=: 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # read -r var val 00:05:33.320 14:10:14 -- accel/accel.sh@20 -- # val=Yes 00:05:33.320 14:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # IFS=: 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # read -r var val 00:05:33.320 14:10:14 -- accel/accel.sh@20 -- # val= 00:05:33.320 14:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # IFS=: 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # read -r var val 00:05:33.320 14:10:14 -- accel/accel.sh@20 -- # val= 00:05:33.320 14:10:14 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # IFS=: 00:05:33.320 14:10:14 -- accel/accel.sh@19 -- # read -r var val 00:05:34.748 14:10:16 -- accel/accel.sh@20 -- # val= 00:05:34.748 14:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.748 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:34.748 14:10:16 -- accel/accel.sh@19 -- # read -r var val 00:05:34.748 14:10:16 -- accel/accel.sh@20 -- # val= 00:05:34.748 14:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.748 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:34.748 14:10:16 -- accel/accel.sh@19 -- # read -r var val 00:05:34.748 14:10:16 -- accel/accel.sh@20 -- # val= 00:05:34.748 14:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.748 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:34.748 14:10:16 -- accel/accel.sh@19 -- # read -r var val 00:05:34.748 14:10:16 -- accel/accel.sh@20 -- # val= 00:05:34.748 14:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.748 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:34.748 14:10:16 -- accel/accel.sh@19 -- # read -r var val 00:05:34.748 14:10:16 -- accel/accel.sh@20 -- # val= 00:05:34.748 14:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.748 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:34.748 14:10:16 -- accel/accel.sh@19 -- # 
read -r var val 00:05:34.748 14:10:16 -- accel/accel.sh@20 -- # val= 00:05:34.748 14:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.748 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:34.748 14:10:16 -- accel/accel.sh@19 -- # read -r var val 00:05:34.748 14:10:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.748 14:10:16 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:34.748 14:10:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.748 00:05:34.748 real 0m1.409s 00:05:34.748 user 0m1.276s 00:05:34.748 sys 0m0.135s 00:05:34.748 14:10:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:34.748 14:10:16 -- common/autotest_common.sh@10 -- # set +x 00:05:34.748 ************************************ 00:05:34.748 END TEST accel_xor 00:05:34.748 ************************************ 00:05:34.748 14:10:16 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:34.748 14:10:16 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:34.748 14:10:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.748 14:10:16 -- common/autotest_common.sh@10 -- # set +x 00:05:34.748 ************************************ 00:05:34.748 START TEST accel_dif_verify 00:05:34.748 ************************************ 00:05:34.748 14:10:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:05:34.748 14:10:16 -- accel/accel.sh@16 -- # local accel_opc 00:05:34.748 14:10:16 -- accel/accel.sh@17 -- # local accel_module 00:05:34.748 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:34.748 14:10:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:34.748 14:10:16 -- accel/accel.sh@19 -- # read -r var val 00:05:34.748 14:10:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:34.748 14:10:16 -- accel/accel.sh@12 -- # build_accel_config 00:05:34.748 14:10:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.748 14:10:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.748 14:10:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.748 14:10:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.748 14:10:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.748 14:10:16 -- accel/accel.sh@40 -- # local IFS=, 00:05:34.748 14:10:16 -- accel/accel.sh@41 -- # jq -r . 00:05:34.748 [2024-04-26 14:10:16.178023] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
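Each case above is fenced by START TEST / END TEST banners and followed by real/user/sys timings, the signature of SPDK's run_test helper from autotest_common.sh. A simplified sketch of that wrapper (the real helper also manages xtrace state, so this is an approximation):

  # Simplified run_test: banner, time the command, banner, preserve rc.
  run_test() {
      local name=$1 rc=0
      shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@" || rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }
  # usage mirroring the trace (accel_test is the suite's own wrapper):
  run_test accel_dif_verify accel_test -t 1 -w dif_verify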
00:05:34.748 [2024-04-26 14:10:16.178100] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070083 ] 00:05:34.748 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.748 [2024-04-26 14:10:16.237903] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.007 [2024-04-26 14:10:16.355838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.007 14:10:16 -- accel/accel.sh@20 -- # val= 00:05:35.007 14:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # read -r var val 00:05:35.007 14:10:16 -- accel/accel.sh@20 -- # val= 00:05:35.007 14:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # read -r var val 00:05:35.007 14:10:16 -- accel/accel.sh@20 -- # val=0x1 00:05:35.007 14:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # read -r var val 00:05:35.007 14:10:16 -- accel/accel.sh@20 -- # val= 00:05:35.007 14:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # read -r var val 00:05:35.007 14:10:16 -- accel/accel.sh@20 -- # val= 00:05:35.007 14:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # read -r var val 00:05:35.007 14:10:16 -- accel/accel.sh@20 -- # val=dif_verify 00:05:35.007 14:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.007 14:10:16 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # read -r var val 00:05:35.007 14:10:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:35.007 14:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # read -r var val 00:05:35.007 14:10:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:35.007 14:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # read -r var val 00:05:35.007 14:10:16 -- accel/accel.sh@20 -- # val='512 bytes' 00:05:35.007 14:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # read -r var val 00:05:35.007 14:10:16 -- accel/accel.sh@20 -- # val='8 bytes' 00:05:35.007 14:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # read -r var val 00:05:35.007 14:10:16 -- accel/accel.sh@20 -- # val= 00:05:35.007 14:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # read -r var val 00:05:35.007 14:10:16 -- accel/accel.sh@20 -- # val=software 00:05:35.007 14:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.007 14:10:16 -- accel/accel.sh@22 -- # accel_module=software 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # read -r 
var val 00:05:35.007 14:10:16 -- accel/accel.sh@20 -- # val=32 00:05:35.007 14:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # read -r var val 00:05:35.007 14:10:16 -- accel/accel.sh@20 -- # val=32 00:05:35.007 14:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # read -r var val 00:05:35.007 14:10:16 -- accel/accel.sh@20 -- # val=1 00:05:35.007 14:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # read -r var val 00:05:35.007 14:10:16 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:35.007 14:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # read -r var val 00:05:35.007 14:10:16 -- accel/accel.sh@20 -- # val=No 00:05:35.007 14:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # read -r var val 00:05:35.007 14:10:16 -- accel/accel.sh@20 -- # val= 00:05:35.007 14:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # read -r var val 00:05:35.007 14:10:16 -- accel/accel.sh@20 -- # val= 00:05:35.007 14:10:16 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # IFS=: 00:05:35.007 14:10:16 -- accel/accel.sh@19 -- # read -r var val 00:05:36.380 14:10:17 -- accel/accel.sh@20 -- # val= 00:05:36.380 14:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.380 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.380 14:10:17 -- accel/accel.sh@19 -- # read -r var val 00:05:36.380 14:10:17 -- accel/accel.sh@20 -- # val= 00:05:36.380 14:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.380 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.380 14:10:17 -- accel/accel.sh@19 -- # read -r var val 00:05:36.380 14:10:17 -- accel/accel.sh@20 -- # val= 00:05:36.380 14:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.380 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.380 14:10:17 -- accel/accel.sh@19 -- # read -r var val 00:05:36.380 14:10:17 -- accel/accel.sh@20 -- # val= 00:05:36.380 14:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.380 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.380 14:10:17 -- accel/accel.sh@19 -- # read -r var val 00:05:36.380 14:10:17 -- accel/accel.sh@20 -- # val= 00:05:36.380 14:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.380 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.380 14:10:17 -- accel/accel.sh@19 -- # read -r var val 00:05:36.380 14:10:17 -- accel/accel.sh@20 -- # val= 00:05:36.380 14:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.380 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.380 14:10:17 -- accel/accel.sh@19 -- # read -r var val 00:05:36.380 14:10:17 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:36.380 14:10:17 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:36.380 14:10:17 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.380 00:05:36.380 real 0m1.414s 00:05:36.380 user 0m1.289s 00:05:36.380 sys 0m0.128s 00:05:36.380 14:10:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:36.380 14:10:17 -- common/autotest_common.sh@10 -- # set +x 00:05:36.380 
************************************ 00:05:36.380 END TEST accel_dif_verify 00:05:36.380 ************************************ 00:05:36.380 14:10:17 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:36.380 14:10:17 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:36.380 14:10:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.380 14:10:17 -- common/autotest_common.sh@10 -- # set +x 00:05:36.380 ************************************ 00:05:36.380 START TEST accel_dif_generate 00:05:36.380 ************************************ 00:05:36.380 14:10:17 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:05:36.380 14:10:17 -- accel/accel.sh@16 -- # local accel_opc 00:05:36.380 14:10:17 -- accel/accel.sh@17 -- # local accel_module 00:05:36.380 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.380 14:10:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:36.380 14:10:17 -- accel/accel.sh@19 -- # read -r var val 00:05:36.380 14:10:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:36.380 14:10:17 -- accel/accel.sh@12 -- # build_accel_config 00:05:36.380 14:10:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.380 14:10:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.380 14:10:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.380 14:10:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.380 14:10:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.380 14:10:17 -- accel/accel.sh@40 -- # local IFS=, 00:05:36.380 14:10:17 -- accel/accel.sh@41 -- # jq -r . 00:05:36.380 [2024-04-26 14:10:17.730307] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
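Every case above logs "EAL: No free 2048 kB hugepages reported on node 1", i.e. only node 0 carries a hugepage pool on this rig; the tests proceed regardless. If the notice ever needed fixing, the usual knob is the per-node sysfs counter. A sketch, assuming a two-node host and the standard sysfs layout:

  # Reserve 512 x 2 MiB hugepages on NUMA node 1 (standard sysfs path).
  echo 512 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages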
00:05:36.380 [2024-04-26 14:10:17.730385] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070295 ] 00:05:36.380 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.380 [2024-04-26 14:10:17.789637] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.380 [2024-04-26 14:10:17.906856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.639 14:10:17 -- accel/accel.sh@20 -- # val= 00:05:36.639 14:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # read -r var val 00:05:36.639 14:10:17 -- accel/accel.sh@20 -- # val= 00:05:36.639 14:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # read -r var val 00:05:36.639 14:10:17 -- accel/accel.sh@20 -- # val=0x1 00:05:36.639 14:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # read -r var val 00:05:36.639 14:10:17 -- accel/accel.sh@20 -- # val= 00:05:36.639 14:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # read -r var val 00:05:36.639 14:10:17 -- accel/accel.sh@20 -- # val= 00:05:36.639 14:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # read -r var val 00:05:36.639 14:10:17 -- accel/accel.sh@20 -- # val=dif_generate 00:05:36.639 14:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.639 14:10:17 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # read -r var val 00:05:36.639 14:10:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:36.639 14:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # read -r var val 00:05:36.639 14:10:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:36.639 14:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # read -r var val 00:05:36.639 14:10:17 -- accel/accel.sh@20 -- # val='512 bytes' 00:05:36.639 14:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # read -r var val 00:05:36.639 14:10:17 -- accel/accel.sh@20 -- # val='8 bytes' 00:05:36.639 14:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # read -r var val 00:05:36.639 14:10:17 -- accel/accel.sh@20 -- # val= 00:05:36.639 14:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # read -r var val 00:05:36.639 14:10:17 -- accel/accel.sh@20 -- # val=software 00:05:36.639 14:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.639 14:10:17 -- accel/accel.sh@22 -- # accel_module=software 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # read 
-r var val 00:05:36.639 14:10:17 -- accel/accel.sh@20 -- # val=32 00:05:36.639 14:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # read -r var val 00:05:36.639 14:10:17 -- accel/accel.sh@20 -- # val=32 00:05:36.639 14:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # read -r var val 00:05:36.639 14:10:17 -- accel/accel.sh@20 -- # val=1 00:05:36.639 14:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # read -r var val 00:05:36.639 14:10:17 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:36.639 14:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # read -r var val 00:05:36.639 14:10:17 -- accel/accel.sh@20 -- # val=No 00:05:36.639 14:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # read -r var val 00:05:36.639 14:10:17 -- accel/accel.sh@20 -- # val= 00:05:36.639 14:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # read -r var val 00:05:36.639 14:10:17 -- accel/accel.sh@20 -- # val= 00:05:36.639 14:10:17 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # IFS=: 00:05:36.639 14:10:17 -- accel/accel.sh@19 -- # read -r var val 00:05:37.574 14:10:19 -- accel/accel.sh@20 -- # val= 00:05:37.574 14:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.574 14:10:19 -- accel/accel.sh@19 -- # IFS=: 00:05:37.574 14:10:19 -- accel/accel.sh@19 -- # read -r var val 00:05:37.574 14:10:19 -- accel/accel.sh@20 -- # val= 00:05:37.574 14:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.574 14:10:19 -- accel/accel.sh@19 -- # IFS=: 00:05:37.574 14:10:19 -- accel/accel.sh@19 -- # read -r var val 00:05:37.574 14:10:19 -- accel/accel.sh@20 -- # val= 00:05:37.574 14:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.574 14:10:19 -- accel/accel.sh@19 -- # IFS=: 00:05:37.574 14:10:19 -- accel/accel.sh@19 -- # read -r var val 00:05:37.574 14:10:19 -- accel/accel.sh@20 -- # val= 00:05:37.574 14:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.574 14:10:19 -- accel/accel.sh@19 -- # IFS=: 00:05:37.574 14:10:19 -- accel/accel.sh@19 -- # read -r var val 00:05:37.574 14:10:19 -- accel/accel.sh@20 -- # val= 00:05:37.574 14:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.574 14:10:19 -- accel/accel.sh@19 -- # IFS=: 00:05:37.574 14:10:19 -- accel/accel.sh@19 -- # read -r var val 00:05:37.574 14:10:19 -- accel/accel.sh@20 -- # val= 00:05:37.574 14:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.574 14:10:19 -- accel/accel.sh@19 -- # IFS=: 00:05:37.574 14:10:19 -- accel/accel.sh@19 -- # read -r var val 00:05:37.574 14:10:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:37.574 14:10:19 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:37.574 14:10:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.574 00:05:37.574 real 0m1.412s 00:05:37.574 user 0m1.287s 00:05:37.574 sys 0m0.128s 00:05:37.574 14:10:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:37.574 14:10:19 -- common/autotest_common.sh@10 -- # set +x 00:05:37.574 
************************************ 00:05:37.574 END TEST accel_dif_generate 00:05:37.574 ************************************ 00:05:37.833 14:10:19 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:37.833 14:10:19 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:37.833 14:10:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.833 14:10:19 -- common/autotest_common.sh@10 -- # set +x 00:05:37.833 ************************************ 00:05:37.833 START TEST accel_dif_generate_copy 00:05:37.833 ************************************ 00:05:37.833 14:10:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:05:37.833 14:10:19 -- accel/accel.sh@16 -- # local accel_opc 00:05:37.833 14:10:19 -- accel/accel.sh@17 -- # local accel_module 00:05:37.833 14:10:19 -- accel/accel.sh@19 -- # IFS=: 00:05:37.833 14:10:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:37.833 14:10:19 -- accel/accel.sh@19 -- # read -r var val 00:05:37.833 14:10:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:37.833 14:10:19 -- accel/accel.sh@12 -- # build_accel_config 00:05:37.833 14:10:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.833 14:10:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.833 14:10:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.833 14:10:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.833 14:10:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.833 14:10:19 -- accel/accel.sh@40 -- # local IFS=, 00:05:37.833 14:10:19 -- accel/accel.sh@41 -- # jq -r . 00:05:37.833 [2024-04-26 14:10:19.273878] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
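The per-case durations printed by run_test are easy to pull back out of a console log like this one. An illustrative one-liner, with console.log as a placeholder file name:

  # Print each test banner together with its measured wall-clock time.
  grep -E 'START TEST|real[[:space:]]+[0-9]+m[0-9.]+s' console.log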
00:05:37.833 [2024-04-26 14:10:19.273949] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070429 ] 00:05:37.833 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.833 [2024-04-26 14:10:19.332171] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.091 [2024-04-26 14:10:19.450165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.091 14:10:19 -- accel/accel.sh@20 -- # val= 00:05:38.091 14:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # IFS=: 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # read -r var val 00:05:38.091 14:10:19 -- accel/accel.sh@20 -- # val= 00:05:38.091 14:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # IFS=: 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # read -r var val 00:05:38.091 14:10:19 -- accel/accel.sh@20 -- # val=0x1 00:05:38.091 14:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # IFS=: 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # read -r var val 00:05:38.091 14:10:19 -- accel/accel.sh@20 -- # val= 00:05:38.091 14:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # IFS=: 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # read -r var val 00:05:38.091 14:10:19 -- accel/accel.sh@20 -- # val= 00:05:38.091 14:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # IFS=: 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # read -r var val 00:05:38.091 14:10:19 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:38.091 14:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.091 14:10:19 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # IFS=: 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # read -r var val 00:05:38.091 14:10:19 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:38.091 14:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # IFS=: 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # read -r var val 00:05:38.091 14:10:19 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:38.091 14:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # IFS=: 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # read -r var val 00:05:38.091 14:10:19 -- accel/accel.sh@20 -- # val= 00:05:38.091 14:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # IFS=: 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # read -r var val 00:05:38.091 14:10:19 -- accel/accel.sh@20 -- # val=software 00:05:38.091 14:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.091 14:10:19 -- accel/accel.sh@22 -- # accel_module=software 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # IFS=: 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # read -r var val 00:05:38.091 14:10:19 -- accel/accel.sh@20 -- # val=32 00:05:38.091 14:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # IFS=: 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # read -r var val 00:05:38.091 14:10:19 -- accel/accel.sh@20 -- # val=32 00:05:38.091 14:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # IFS=: 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # read -r 
var val 00:05:38.091 14:10:19 -- accel/accel.sh@20 -- # val=1 00:05:38.091 14:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # IFS=: 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # read -r var val 00:05:38.091 14:10:19 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:38.091 14:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # IFS=: 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # read -r var val 00:05:38.091 14:10:19 -- accel/accel.sh@20 -- # val=No 00:05:38.091 14:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # IFS=: 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # read -r var val 00:05:38.091 14:10:19 -- accel/accel.sh@20 -- # val= 00:05:38.091 14:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # IFS=: 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # read -r var val 00:05:38.091 14:10:19 -- accel/accel.sh@20 -- # val= 00:05:38.091 14:10:19 -- accel/accel.sh@21 -- # case "$var" in 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # IFS=: 00:05:38.091 14:10:19 -- accel/accel.sh@19 -- # read -r var val 00:05:39.464 14:10:20 -- accel/accel.sh@20 -- # val= 00:05:39.464 14:10:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.464 14:10:20 -- accel/accel.sh@19 -- # IFS=: 00:05:39.464 14:10:20 -- accel/accel.sh@19 -- # read -r var val 00:05:39.464 14:10:20 -- accel/accel.sh@20 -- # val= 00:05:39.464 14:10:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.464 14:10:20 -- accel/accel.sh@19 -- # IFS=: 00:05:39.464 14:10:20 -- accel/accel.sh@19 -- # read -r var val 00:05:39.464 14:10:20 -- accel/accel.sh@20 -- # val= 00:05:39.464 14:10:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.464 14:10:20 -- accel/accel.sh@19 -- # IFS=: 00:05:39.464 14:10:20 -- accel/accel.sh@19 -- # read -r var val 00:05:39.464 14:10:20 -- accel/accel.sh@20 -- # val= 00:05:39.464 14:10:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.464 14:10:20 -- accel/accel.sh@19 -- # IFS=: 00:05:39.464 14:10:20 -- accel/accel.sh@19 -- # read -r var val 00:05:39.464 14:10:20 -- accel/accel.sh@20 -- # val= 00:05:39.464 14:10:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.464 14:10:20 -- accel/accel.sh@19 -- # IFS=: 00:05:39.464 14:10:20 -- accel/accel.sh@19 -- # read -r var val 00:05:39.464 14:10:20 -- accel/accel.sh@20 -- # val= 00:05:39.464 14:10:20 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.464 14:10:20 -- accel/accel.sh@19 -- # IFS=: 00:05:39.464 14:10:20 -- accel/accel.sh@19 -- # read -r var val 00:05:39.464 14:10:20 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:39.464 14:10:20 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:39.464 14:10:20 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.464 00:05:39.464 real 0m1.410s 00:05:39.464 user 0m1.290s 00:05:39.464 sys 0m0.120s 00:05:39.464 14:10:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:39.464 14:10:20 -- common/autotest_common.sh@10 -- # set +x 00:05:39.464 ************************************ 00:05:39.464 END TEST accel_dif_generate_copy 00:05:39.464 ************************************ 00:05:39.464 14:10:20 -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:39.464 14:10:20 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:39.465 14:10:20 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:39.465 14:10:20 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.465 14:10:20 -- common/autotest_common.sh@10 -- # set +x 00:05:39.465 ************************************ 00:05:39.465 START TEST accel_comp 00:05:39.465 ************************************ 00:05:39.465 14:10:20 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:39.465 14:10:20 -- accel/accel.sh@16 -- # local accel_opc 00:05:39.465 14:10:20 -- accel/accel.sh@17 -- # local accel_module 00:05:39.465 14:10:20 -- accel/accel.sh@19 -- # IFS=: 00:05:39.465 14:10:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:39.465 14:10:20 -- accel/accel.sh@19 -- # read -r var val 00:05:39.465 14:10:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:39.465 14:10:20 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.465 14:10:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.465 14:10:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.465 14:10:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.465 14:10:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.465 14:10:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.465 14:10:20 -- accel/accel.sh@40 -- # local IFS=, 00:05:39.465 14:10:20 -- accel/accel.sh@41 -- # jq -r . 00:05:39.465 [2024-04-26 14:10:20.813122] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:05:39.465 [2024-04-26 14:10:20.813191] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070645 ] 00:05:39.465 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.465 [2024-04-26 14:10:20.872143] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.465 [2024-04-26 14:10:20.989605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.723 14:10:21 -- accel/accel.sh@20 -- # val= 00:05:39.723 14:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # IFS=: 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # read -r var val 00:05:39.723 14:10:21 -- accel/accel.sh@20 -- # val= 00:05:39.723 14:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # IFS=: 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # read -r var val 00:05:39.723 14:10:21 -- accel/accel.sh@20 -- # val= 00:05:39.723 14:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # IFS=: 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # read -r var val 00:05:39.723 14:10:21 -- accel/accel.sh@20 -- # val=0x1 00:05:39.723 14:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # IFS=: 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # read -r var val 00:05:39.723 14:10:21 -- accel/accel.sh@20 -- # val= 00:05:39.723 14:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # IFS=: 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # read -r var val 00:05:39.723 14:10:21 -- accel/accel.sh@20 -- # val= 00:05:39.723 14:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # IFS=: 00:05:39.723 14:10:21 
-- accel/accel.sh@19 -- # read -r var val 00:05:39.723 14:10:21 -- accel/accel.sh@20 -- # val=compress 00:05:39.723 14:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.723 14:10:21 -- accel/accel.sh@23 -- # accel_opc=compress 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # IFS=: 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # read -r var val 00:05:39.723 14:10:21 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.723 14:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # IFS=: 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # read -r var val 00:05:39.723 14:10:21 -- accel/accel.sh@20 -- # val= 00:05:39.723 14:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # IFS=: 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # read -r var val 00:05:39.723 14:10:21 -- accel/accel.sh@20 -- # val=software 00:05:39.723 14:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.723 14:10:21 -- accel/accel.sh@22 -- # accel_module=software 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # IFS=: 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # read -r var val 00:05:39.723 14:10:21 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:39.723 14:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # IFS=: 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # read -r var val 00:05:39.723 14:10:21 -- accel/accel.sh@20 -- # val=32 00:05:39.723 14:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # IFS=: 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # read -r var val 00:05:39.723 14:10:21 -- accel/accel.sh@20 -- # val=32 00:05:39.723 14:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # IFS=: 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # read -r var val 00:05:39.723 14:10:21 -- accel/accel.sh@20 -- # val=1 00:05:39.723 14:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # IFS=: 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # read -r var val 00:05:39.723 14:10:21 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:39.723 14:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # IFS=: 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # read -r var val 00:05:39.723 14:10:21 -- accel/accel.sh@20 -- # val=No 00:05:39.723 14:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # IFS=: 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # read -r var val 00:05:39.723 14:10:21 -- accel/accel.sh@20 -- # val= 00:05:39.723 14:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # IFS=: 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # read -r var val 00:05:39.723 14:10:21 -- accel/accel.sh@20 -- # val= 00:05:39.723 14:10:21 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # IFS=: 00:05:39.723 14:10:21 -- accel/accel.sh@19 -- # read -r var val 00:05:40.656 14:10:22 -- accel/accel.sh@20 -- # val= 00:05:40.656 14:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.656 14:10:22 -- accel/accel.sh@19 -- # IFS=: 00:05:40.656 14:10:22 -- accel/accel.sh@19 -- # read -r var val 00:05:40.657 14:10:22 -- accel/accel.sh@20 -- # val= 00:05:40.657 14:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.657 14:10:22 -- accel/accel.sh@19 -- # IFS=: 00:05:40.657 14:10:22 -- accel/accel.sh@19 -- # read 
-r var val 00:05:40.657 14:10:22 -- accel/accel.sh@20 -- # val= 00:05:40.657 14:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.657 14:10:22 -- accel/accel.sh@19 -- # IFS=: 00:05:40.657 14:10:22 -- accel/accel.sh@19 -- # read -r var val 00:05:40.657 14:10:22 -- accel/accel.sh@20 -- # val= 00:05:40.657 14:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.657 14:10:22 -- accel/accel.sh@19 -- # IFS=: 00:05:40.657 14:10:22 -- accel/accel.sh@19 -- # read -r var val 00:05:40.657 14:10:22 -- accel/accel.sh@20 -- # val= 00:05:40.657 14:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.657 14:10:22 -- accel/accel.sh@19 -- # IFS=: 00:05:40.657 14:10:22 -- accel/accel.sh@19 -- # read -r var val 00:05:40.657 14:10:22 -- accel/accel.sh@20 -- # val= 00:05:40.657 14:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.657 14:10:22 -- accel/accel.sh@19 -- # IFS=: 00:05:40.657 14:10:22 -- accel/accel.sh@19 -- # read -r var val 00:05:40.657 14:10:22 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.657 14:10:22 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:40.657 14:10:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.657 00:05:40.657 real 0m1.413s 00:05:40.657 user 0m1.280s 00:05:40.657 sys 0m0.135s 00:05:40.657 14:10:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:40.657 14:10:22 -- common/autotest_common.sh@10 -- # set +x 00:05:40.657 ************************************ 00:05:40.657 END TEST accel_comp 00:05:40.657 ************************************ 00:05:40.915 14:10:22 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:40.915 14:10:22 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:40.915 14:10:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.915 14:10:22 -- common/autotest_common.sh@10 -- # set +x 00:05:40.915 ************************************ 00:05:40.915 START TEST accel_decomp 00:05:40.915 ************************************ 00:05:40.915 14:10:22 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:40.915 14:10:22 -- accel/accel.sh@16 -- # local accel_opc 00:05:40.915 14:10:22 -- accel/accel.sh@17 -- # local accel_module 00:05:40.915 14:10:22 -- accel/accel.sh@19 -- # IFS=: 00:05:40.915 14:10:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:40.915 14:10:22 -- accel/accel.sh@19 -- # read -r var val 00:05:40.915 14:10:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:40.915 14:10:22 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.915 14:10:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.915 14:10:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.915 14:10:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.915 14:10:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.915 14:10:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.915 14:10:22 -- accel/accel.sh@40 -- # local IFS=, 00:05:40.915 14:10:22 -- accel/accel.sh@41 -- # jq -r . 00:05:40.915 [2024-04-26 14:10:22.357870] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
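Note: the compress/decompress pair traced above reduces to two accel_perf invocations. A minimal standalone sketch, assuming the in-tree build and data paths that appear in the trace (the SPDK= shorthand is only for brevity here, and the harness's extra -c /dev/fd/62 JSON config is skipped since the traced accel_json_cfg=() is empty):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # compress the bib test file for 1 second on the software accel module
  $SPDK/build/examples/accel_perf -t 1 -w compress -l $SPDK/test/accel/bib
  # same workload in the reverse direction; -y additionally verifies each output buffer
  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y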
00:05:40.916 [2024-04-26 14:10:22.357937] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070776 ] 00:05:40.916 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.916 [2024-04-26 14:10:22.416536] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.174 [2024-04-26 14:10:22.530755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.174 14:10:22 -- accel/accel.sh@20 -- # val= 00:05:41.174 14:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # IFS=: 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # read -r var val 00:05:41.174 14:10:22 -- accel/accel.sh@20 -- # val= 00:05:41.174 14:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # IFS=: 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # read -r var val 00:05:41.174 14:10:22 -- accel/accel.sh@20 -- # val= 00:05:41.174 14:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # IFS=: 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # read -r var val 00:05:41.174 14:10:22 -- accel/accel.sh@20 -- # val=0x1 00:05:41.174 14:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # IFS=: 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # read -r var val 00:05:41.174 14:10:22 -- accel/accel.sh@20 -- # val= 00:05:41.174 14:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # IFS=: 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # read -r var val 00:05:41.174 14:10:22 -- accel/accel.sh@20 -- # val= 00:05:41.174 14:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # IFS=: 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # read -r var val 00:05:41.174 14:10:22 -- accel/accel.sh@20 -- # val=decompress 00:05:41.174 14:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.174 14:10:22 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # IFS=: 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # read -r var val 00:05:41.174 14:10:22 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.174 14:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # IFS=: 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # read -r var val 00:05:41.174 14:10:22 -- accel/accel.sh@20 -- # val= 00:05:41.174 14:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # IFS=: 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # read -r var val 00:05:41.174 14:10:22 -- accel/accel.sh@20 -- # val=software 00:05:41.174 14:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.174 14:10:22 -- accel/accel.sh@22 -- # accel_module=software 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # IFS=: 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # read -r var val 00:05:41.174 14:10:22 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:41.174 14:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # IFS=: 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # read -r var val 00:05:41.174 14:10:22 -- accel/accel.sh@20 -- # val=32 00:05:41.174 14:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # IFS=: 00:05:41.174 14:10:22 
-- accel/accel.sh@19 -- # read -r var val 00:05:41.174 14:10:22 -- accel/accel.sh@20 -- # val=32 00:05:41.174 14:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # IFS=: 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # read -r var val 00:05:41.174 14:10:22 -- accel/accel.sh@20 -- # val=1 00:05:41.174 14:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # IFS=: 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # read -r var val 00:05:41.174 14:10:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.174 14:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # IFS=: 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # read -r var val 00:05:41.174 14:10:22 -- accel/accel.sh@20 -- # val=Yes 00:05:41.174 14:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # IFS=: 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # read -r var val 00:05:41.174 14:10:22 -- accel/accel.sh@20 -- # val= 00:05:41.174 14:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # IFS=: 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # read -r var val 00:05:41.174 14:10:22 -- accel/accel.sh@20 -- # val= 00:05:41.174 14:10:22 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # IFS=: 00:05:41.174 14:10:22 -- accel/accel.sh@19 -- # read -r var val 00:05:42.548 14:10:23 -- accel/accel.sh@20 -- # val= 00:05:42.548 14:10:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.548 14:10:23 -- accel/accel.sh@19 -- # IFS=: 00:05:42.548 14:10:23 -- accel/accel.sh@19 -- # read -r var val 00:05:42.548 14:10:23 -- accel/accel.sh@20 -- # val= 00:05:42.548 14:10:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.548 14:10:23 -- accel/accel.sh@19 -- # IFS=: 00:05:42.548 14:10:23 -- accel/accel.sh@19 -- # read -r var val 00:05:42.548 14:10:23 -- accel/accel.sh@20 -- # val= 00:05:42.548 14:10:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.548 14:10:23 -- accel/accel.sh@19 -- # IFS=: 00:05:42.548 14:10:23 -- accel/accel.sh@19 -- # read -r var val 00:05:42.548 14:10:23 -- accel/accel.sh@20 -- # val= 00:05:42.548 14:10:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.548 14:10:23 -- accel/accel.sh@19 -- # IFS=: 00:05:42.548 14:10:23 -- accel/accel.sh@19 -- # read -r var val 00:05:42.548 14:10:23 -- accel/accel.sh@20 -- # val= 00:05:42.548 14:10:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.548 14:10:23 -- accel/accel.sh@19 -- # IFS=: 00:05:42.548 14:10:23 -- accel/accel.sh@19 -- # read -r var val 00:05:42.548 14:10:23 -- accel/accel.sh@20 -- # val= 00:05:42.548 14:10:23 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.548 14:10:23 -- accel/accel.sh@19 -- # IFS=: 00:05:42.548 14:10:23 -- accel/accel.sh@19 -- # read -r var val 00:05:42.548 14:10:23 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.548 14:10:23 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:42.548 14:10:23 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.548 00:05:42.548 real 0m1.407s 00:05:42.548 user 0m1.286s 00:05:42.548 sys 0m0.123s 00:05:42.548 14:10:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:42.548 14:10:23 -- common/autotest_common.sh@10 -- # set +x 00:05:42.548 ************************************ 00:05:42.548 END TEST accel_decomp 00:05:42.548 ************************************ 00:05:42.548 14:10:23 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:42.548 14:10:23 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:42.548 14:10:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.548 14:10:23 -- common/autotest_common.sh@10 -- # set +x 00:05:42.548 ************************************ 00:05:42.548 START TEST accel_decmop_full 00:05:42.548 ************************************ 00:05:42.548 14:10:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:42.548 14:10:23 -- accel/accel.sh@16 -- # local accel_opc 00:05:42.548 14:10:23 -- accel/accel.sh@17 -- # local accel_module 00:05:42.548 14:10:23 -- accel/accel.sh@19 -- # IFS=: 00:05:42.548 14:10:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:42.548 14:10:23 -- accel/accel.sh@19 -- # read -r var val 00:05:42.548 14:10:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:42.548 14:10:23 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.548 14:10:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.548 14:10:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.548 14:10:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.548 14:10:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.548 14:10:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.548 14:10:23 -- accel/accel.sh@40 -- # local IFS=, 00:05:42.548 14:10:23 -- accel/accel.sh@41 -- # jq -r . 00:05:42.549 [2024-04-26 14:10:23.902259] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
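Note: relative to the plain accel_decomp run above, the only new flag in this full-buffer variant is -o 0. Going by the trace, a zero transfer size makes accel_perf take the whole input file per operation, which is why the traced buffer size changes from '4096 bytes' to '111250 bytes'. Sketch, same path assumptions as the earlier one:

  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0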
00:05:42.549 [2024-04-26 14:10:23.902329] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070917 ] 00:05:42.549 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.549 [2024-04-26 14:10:23.961744] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.549 [2024-04-26 14:10:24.079585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.807 14:10:24 -- accel/accel.sh@20 -- # val= 00:05:42.807 14:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # IFS=: 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # read -r var val 00:05:42.807 14:10:24 -- accel/accel.sh@20 -- # val= 00:05:42.807 14:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # IFS=: 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # read -r var val 00:05:42.807 14:10:24 -- accel/accel.sh@20 -- # val= 00:05:42.807 14:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # IFS=: 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # read -r var val 00:05:42.807 14:10:24 -- accel/accel.sh@20 -- # val=0x1 00:05:42.807 14:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # IFS=: 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # read -r var val 00:05:42.807 14:10:24 -- accel/accel.sh@20 -- # val= 00:05:42.807 14:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # IFS=: 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # read -r var val 00:05:42.807 14:10:24 -- accel/accel.sh@20 -- # val= 00:05:42.807 14:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # IFS=: 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # read -r var val 00:05:42.807 14:10:24 -- accel/accel.sh@20 -- # val=decompress 00:05:42.807 14:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.807 14:10:24 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # IFS=: 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # read -r var val 00:05:42.807 14:10:24 -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:42.807 14:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # IFS=: 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # read -r var val 00:05:42.807 14:10:24 -- accel/accel.sh@20 -- # val= 00:05:42.807 14:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # IFS=: 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # read -r var val 00:05:42.807 14:10:24 -- accel/accel.sh@20 -- # val=software 00:05:42.807 14:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.807 14:10:24 -- accel/accel.sh@22 -- # accel_module=software 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # IFS=: 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # read -r var val 00:05:42.807 14:10:24 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:42.807 14:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # IFS=: 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # read -r var val 00:05:42.807 14:10:24 -- accel/accel.sh@20 -- # val=32 00:05:42.807 14:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # IFS=: 00:05:42.807 
14:10:24 -- accel/accel.sh@19 -- # read -r var val 00:05:42.807 14:10:24 -- accel/accel.sh@20 -- # val=32 00:05:42.807 14:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # IFS=: 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # read -r var val 00:05:42.807 14:10:24 -- accel/accel.sh@20 -- # val=1 00:05:42.807 14:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # IFS=: 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # read -r var val 00:05:42.807 14:10:24 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.807 14:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # IFS=: 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # read -r var val 00:05:42.807 14:10:24 -- accel/accel.sh@20 -- # val=Yes 00:05:42.807 14:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # IFS=: 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # read -r var val 00:05:42.807 14:10:24 -- accel/accel.sh@20 -- # val= 00:05:42.807 14:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # IFS=: 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # read -r var val 00:05:42.807 14:10:24 -- accel/accel.sh@20 -- # val= 00:05:42.807 14:10:24 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # IFS=: 00:05:42.807 14:10:24 -- accel/accel.sh@19 -- # read -r var val 00:05:44.181 14:10:25 -- accel/accel.sh@20 -- # val= 00:05:44.181 14:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.181 14:10:25 -- accel/accel.sh@19 -- # IFS=: 00:05:44.181 14:10:25 -- accel/accel.sh@19 -- # read -r var val 00:05:44.181 14:10:25 -- accel/accel.sh@20 -- # val= 00:05:44.181 14:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.181 14:10:25 -- accel/accel.sh@19 -- # IFS=: 00:05:44.181 14:10:25 -- accel/accel.sh@19 -- # read -r var val 00:05:44.181 14:10:25 -- accel/accel.sh@20 -- # val= 00:05:44.181 14:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.181 14:10:25 -- accel/accel.sh@19 -- # IFS=: 00:05:44.181 14:10:25 -- accel/accel.sh@19 -- # read -r var val 00:05:44.181 14:10:25 -- accel/accel.sh@20 -- # val= 00:05:44.181 14:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.181 14:10:25 -- accel/accel.sh@19 -- # IFS=: 00:05:44.181 14:10:25 -- accel/accel.sh@19 -- # read -r var val 00:05:44.181 14:10:25 -- accel/accel.sh@20 -- # val= 00:05:44.181 14:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.181 14:10:25 -- accel/accel.sh@19 -- # IFS=: 00:05:44.181 14:10:25 -- accel/accel.sh@19 -- # read -r var val 00:05:44.181 14:10:25 -- accel/accel.sh@20 -- # val= 00:05:44.181 14:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.181 14:10:25 -- accel/accel.sh@19 -- # IFS=: 00:05:44.181 14:10:25 -- accel/accel.sh@19 -- # read -r var val 00:05:44.181 14:10:25 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.181 14:10:25 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:44.181 14:10:25 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.181 00:05:44.181 real 0m1.433s 00:05:44.181 user 0m1.312s 00:05:44.181 sys 0m0.122s 00:05:44.181 14:10:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:44.181 14:10:25 -- common/autotest_common.sh@10 -- # set +x 00:05:44.181 ************************************ 00:05:44.181 END TEST accel_decmop_full 00:05:44.181 ************************************ 00:05:44.181 14:10:25 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore 
accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:44.181 14:10:25 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:44.181 14:10:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.181 14:10:25 -- common/autotest_common.sh@10 -- # set +x 00:05:44.181 ************************************ 00:05:44.181 START TEST accel_decomp_mcore 00:05:44.181 ************************************ 00:05:44.181 14:10:25 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:44.181 14:10:25 -- accel/accel.sh@16 -- # local accel_opc 00:05:44.181 14:10:25 -- accel/accel.sh@17 -- # local accel_module 00:05:44.181 14:10:25 -- accel/accel.sh@19 -- # IFS=: 00:05:44.181 14:10:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:44.181 14:10:25 -- accel/accel.sh@19 -- # read -r var val 00:05:44.181 14:10:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:44.181 14:10:25 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.181 14:10:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.181 14:10:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.181 14:10:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.181 14:10:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.181 14:10:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.181 14:10:25 -- accel/accel.sh@40 -- # local IFS=, 00:05:44.181 14:10:25 -- accel/accel.sh@41 -- # jq -r . 00:05:44.181 [2024-04-26 14:10:25.470255] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
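Note: the mcore variant adds -m 0xf, widening the core mask from the 0x1 used so far to four cores; accordingly the EAL parameters below carry -c 0xf, four cores are reported available, and a reactor starts on each of cores 0-3. Standalone form under the same assumptions:

  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -m 0xf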
00:05:44.181 [2024-04-26 14:10:25.470324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071131 ] 00:05:44.181 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.181 [2024-04-26 14:10:25.529756] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:44.181 [2024-04-26 14:10:25.650918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.181 [2024-04-26 14:10:25.652652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.181 [2024-04-26 14:10:25.652682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.181 [2024-04-26 14:10:25.652686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.181 14:10:25 -- accel/accel.sh@20 -- # val= 00:05:44.181 14:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.181 14:10:25 -- accel/accel.sh@19 -- # IFS=: 00:05:44.181 14:10:25 -- accel/accel.sh@19 -- # read -r var val 00:05:44.181 14:10:25 -- accel/accel.sh@20 -- # val= 00:05:44.181 14:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.181 14:10:25 -- accel/accel.sh@19 -- # IFS=: 00:05:44.181 14:10:25 -- accel/accel.sh@19 -- # read -r var val 00:05:44.181 14:10:25 -- accel/accel.sh@20 -- # val= 00:05:44.181 14:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.181 14:10:25 -- accel/accel.sh@19 -- # IFS=: 00:05:44.181 14:10:25 -- accel/accel.sh@19 -- # read -r var val 00:05:44.181 14:10:25 -- accel/accel.sh@20 -- # val=0xf 00:05:44.181 14:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.181 14:10:25 -- accel/accel.sh@19 -- # IFS=: 00:05:44.181 14:10:25 -- accel/accel.sh@19 -- # read -r var val 00:05:44.181 14:10:25 -- accel/accel.sh@20 -- # val= 00:05:44.181 14:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.181 14:10:25 -- accel/accel.sh@19 -- # IFS=: 00:05:44.181 14:10:25 -- accel/accel.sh@19 -- # read -r var val 00:05:44.181 14:10:25 -- accel/accel.sh@20 -- # val= 00:05:44.182 14:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # IFS=: 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # read -r var val 00:05:44.182 14:10:25 -- accel/accel.sh@20 -- # val=decompress 00:05:44.182 14:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.182 14:10:25 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # IFS=: 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # read -r var val 00:05:44.182 14:10:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.182 14:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # IFS=: 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # read -r var val 00:05:44.182 14:10:25 -- accel/accel.sh@20 -- # val= 00:05:44.182 14:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # IFS=: 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # read -r var val 00:05:44.182 14:10:25 -- accel/accel.sh@20 -- # val=software 00:05:44.182 14:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.182 14:10:25 -- accel/accel.sh@22 -- # accel_module=software 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # IFS=: 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # read -r var val 00:05:44.182 14:10:25 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:44.182 14:10:25 -- accel/accel.sh@21 -- # case 
"$var" in 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # IFS=: 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # read -r var val 00:05:44.182 14:10:25 -- accel/accel.sh@20 -- # val=32 00:05:44.182 14:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # IFS=: 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # read -r var val 00:05:44.182 14:10:25 -- accel/accel.sh@20 -- # val=32 00:05:44.182 14:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # IFS=: 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # read -r var val 00:05:44.182 14:10:25 -- accel/accel.sh@20 -- # val=1 00:05:44.182 14:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # IFS=: 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # read -r var val 00:05:44.182 14:10:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.182 14:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # IFS=: 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # read -r var val 00:05:44.182 14:10:25 -- accel/accel.sh@20 -- # val=Yes 00:05:44.182 14:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # IFS=: 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # read -r var val 00:05:44.182 14:10:25 -- accel/accel.sh@20 -- # val= 00:05:44.182 14:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # IFS=: 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # read -r var val 00:05:44.182 14:10:25 -- accel/accel.sh@20 -- # val= 00:05:44.182 14:10:25 -- accel/accel.sh@21 -- # case "$var" in 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # IFS=: 00:05:44.182 14:10:25 -- accel/accel.sh@19 -- # read -r var val 00:05:45.556 14:10:26 -- accel/accel.sh@20 -- # val= 00:05:45.556 14:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.556 14:10:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.556 14:10:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.556 14:10:26 -- accel/accel.sh@20 -- # val= 00:05:45.556 14:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.556 14:10:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.556 14:10:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.556 14:10:26 -- accel/accel.sh@20 -- # val= 00:05:45.556 14:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.556 14:10:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.556 14:10:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.557 14:10:26 -- accel/accel.sh@20 -- # val= 00:05:45.557 14:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.557 14:10:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.557 14:10:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.557 14:10:26 -- accel/accel.sh@20 -- # val= 00:05:45.557 14:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.557 14:10:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.557 14:10:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.557 14:10:26 -- accel/accel.sh@20 -- # val= 00:05:45.557 14:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.557 14:10:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.557 14:10:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.557 14:10:26 -- accel/accel.sh@20 -- # val= 00:05:45.557 14:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.557 14:10:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.557 14:10:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.557 14:10:26 -- accel/accel.sh@20 -- # val= 00:05:45.557 14:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.557 
14:10:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.557 14:10:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.557 14:10:26 -- accel/accel.sh@20 -- # val= 00:05:45.557 14:10:26 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.557 14:10:26 -- accel/accel.sh@19 -- # IFS=: 00:05:45.557 14:10:26 -- accel/accel.sh@19 -- # read -r var val 00:05:45.557 14:10:26 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.557 14:10:26 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:45.557 14:10:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.557 00:05:45.557 real 0m1.431s 00:05:45.557 user 0m4.626s 00:05:45.557 sys 0m0.128s 00:05:45.557 14:10:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:45.557 14:10:26 -- common/autotest_common.sh@10 -- # set +x 00:05:45.557 ************************************ 00:05:45.557 END TEST accel_decomp_mcore 00:05:45.557 ************************************ 00:05:45.557 14:10:26 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:45.557 14:10:26 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:45.557 14:10:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.557 14:10:26 -- common/autotest_common.sh@10 -- # set +x 00:05:45.557 ************************************ 00:05:45.557 START TEST accel_decomp_full_mcore 00:05:45.557 ************************************ 00:05:45.557 14:10:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:45.557 14:10:27 -- accel/accel.sh@16 -- # local accel_opc 00:05:45.557 14:10:27 -- accel/accel.sh@17 -- # local accel_module 00:05:45.557 14:10:27 -- accel/accel.sh@19 -- # IFS=: 00:05:45.557 14:10:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:45.557 14:10:27 -- accel/accel.sh@19 -- # read -r var val 00:05:45.557 14:10:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:45.557 14:10:27 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.557 14:10:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.557 14:10:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.557 14:10:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.557 14:10:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.557 14:10:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.557 14:10:27 -- accel/accel.sh@40 -- # local IFS=, 00:05:45.557 14:10:27 -- accel/accel.sh@41 -- # jq -r . 00:05:45.557 [2024-04-26 14:10:27.045377] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
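Note: accel_decomp_full_mcore simply combines the two preceding variants, full-file transfers (-o 0) spread across the 0xf core mask (-m 0xf); both the '111250 bytes' buffer size and the four reactor lines reappear in the trace below. Sketch, same assumptions:

  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -m 0xf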
00:05:45.557 [2024-04-26 14:10:27.045456] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071268 ] 00:05:45.557 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.557 [2024-04-26 14:10:27.106385] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:45.816 [2024-04-26 14:10:27.226170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.816 [2024-04-26 14:10:27.226275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.816 [2024-04-26 14:10:27.226279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.816 [2024-04-26 14:10:27.226223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.816 14:10:27 -- accel/accel.sh@20 -- # val= 00:05:45.816 14:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # IFS=: 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # read -r var val 00:05:45.816 14:10:27 -- accel/accel.sh@20 -- # val= 00:05:45.816 14:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # IFS=: 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # read -r var val 00:05:45.816 14:10:27 -- accel/accel.sh@20 -- # val= 00:05:45.816 14:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # IFS=: 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # read -r var val 00:05:45.816 14:10:27 -- accel/accel.sh@20 -- # val=0xf 00:05:45.816 14:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # IFS=: 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # read -r var val 00:05:45.816 14:10:27 -- accel/accel.sh@20 -- # val= 00:05:45.816 14:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # IFS=: 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # read -r var val 00:05:45.816 14:10:27 -- accel/accel.sh@20 -- # val= 00:05:45.816 14:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # IFS=: 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # read -r var val 00:05:45.816 14:10:27 -- accel/accel.sh@20 -- # val=decompress 00:05:45.816 14:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.816 14:10:27 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # IFS=: 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # read -r var val 00:05:45.816 14:10:27 -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:45.816 14:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # IFS=: 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # read -r var val 00:05:45.816 14:10:27 -- accel/accel.sh@20 -- # val= 00:05:45.816 14:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # IFS=: 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # read -r var val 00:05:45.816 14:10:27 -- accel/accel.sh@20 -- # val=software 00:05:45.816 14:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.816 14:10:27 -- accel/accel.sh@22 -- # accel_module=software 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # IFS=: 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # read -r var val 00:05:45.816 14:10:27 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:45.816 14:10:27 -- accel/accel.sh@21 -- # case 
"$var" in 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # IFS=: 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # read -r var val 00:05:45.816 14:10:27 -- accel/accel.sh@20 -- # val=32 00:05:45.816 14:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # IFS=: 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # read -r var val 00:05:45.816 14:10:27 -- accel/accel.sh@20 -- # val=32 00:05:45.816 14:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # IFS=: 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # read -r var val 00:05:45.816 14:10:27 -- accel/accel.sh@20 -- # val=1 00:05:45.816 14:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # IFS=: 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # read -r var val 00:05:45.816 14:10:27 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.816 14:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # IFS=: 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # read -r var val 00:05:45.816 14:10:27 -- accel/accel.sh@20 -- # val=Yes 00:05:45.816 14:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # IFS=: 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # read -r var val 00:05:45.816 14:10:27 -- accel/accel.sh@20 -- # val= 00:05:45.816 14:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # IFS=: 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # read -r var val 00:05:45.816 14:10:27 -- accel/accel.sh@20 -- # val= 00:05:45.816 14:10:27 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # IFS=: 00:05:45.816 14:10:27 -- accel/accel.sh@19 -- # read -r var val 00:05:47.191 14:10:28 -- accel/accel.sh@20 -- # val= 00:05:47.191 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.191 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.191 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.191 14:10:28 -- accel/accel.sh@20 -- # val= 00:05:47.191 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.191 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.191 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.191 14:10:28 -- accel/accel.sh@20 -- # val= 00:05:47.191 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.191 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.191 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.191 14:10:28 -- accel/accel.sh@20 -- # val= 00:05:47.191 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.191 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.191 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.191 14:10:28 -- accel/accel.sh@20 -- # val= 00:05:47.191 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.191 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.191 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.191 14:10:28 -- accel/accel.sh@20 -- # val= 00:05:47.191 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.191 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.191 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.191 14:10:28 -- accel/accel.sh@20 -- # val= 00:05:47.191 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.191 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.191 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.191 14:10:28 -- accel/accel.sh@20 -- # val= 00:05:47.191 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.191 
14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.191 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.191 14:10:28 -- accel/accel.sh@20 -- # val= 00:05:47.191 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.191 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.191 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.191 14:10:28 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.191 14:10:28 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:47.191 14:10:28 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.191 00:05:47.191 real 0m1.445s 00:05:47.191 user 0m4.697s 00:05:47.191 sys 0m0.130s 00:05:47.191 14:10:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:47.191 14:10:28 -- common/autotest_common.sh@10 -- # set +x 00:05:47.191 ************************************ 00:05:47.191 END TEST accel_decomp_full_mcore 00:05:47.191 ************************************ 00:05:47.191 14:10:28 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:47.191 14:10:28 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:05:47.191 14:10:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.191 14:10:28 -- common/autotest_common.sh@10 -- # set +x 00:05:47.191 ************************************ 00:05:47.191 START TEST accel_decomp_mthread 00:05:47.191 ************************************ 00:05:47.191 14:10:28 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:47.191 14:10:28 -- accel/accel.sh@16 -- # local accel_opc 00:05:47.191 14:10:28 -- accel/accel.sh@17 -- # local accel_module 00:05:47.191 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.191 14:10:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:47.191 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.191 14:10:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:47.191 14:10:28 -- accel/accel.sh@12 -- # build_accel_config 00:05:47.191 14:10:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.191 14:10:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.191 14:10:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.191 14:10:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.191 14:10:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.191 14:10:28 -- accel/accel.sh@40 -- # local IFS=, 00:05:47.191 14:10:28 -- accel/accel.sh@41 -- # jq -r . 00:05:47.192 [2024-04-26 14:10:28.618879] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
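Note: the mthread variant stays on the single-core 0x1 mask and instead passes -T 2, which shows up in the trace as val=2 and is read here as two worker threads on the one core. Sketch, same assumptions:

  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -T 2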
00:05:47.192 [2024-04-26 14:10:28.618948] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071489 ] 00:05:47.192 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.192 [2024-04-26 14:10:28.679447] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.450 [2024-04-26 14:10:28.797182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.450 14:10:28 -- accel/accel.sh@20 -- # val= 00:05:47.450 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.450 14:10:28 -- accel/accel.sh@20 -- # val= 00:05:47.450 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.450 14:10:28 -- accel/accel.sh@20 -- # val= 00:05:47.450 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.450 14:10:28 -- accel/accel.sh@20 -- # val=0x1 00:05:47.450 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.450 14:10:28 -- accel/accel.sh@20 -- # val= 00:05:47.450 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.450 14:10:28 -- accel/accel.sh@20 -- # val= 00:05:47.450 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.450 14:10:28 -- accel/accel.sh@20 -- # val=decompress 00:05:47.450 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.450 14:10:28 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.450 14:10:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.450 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.450 14:10:28 -- accel/accel.sh@20 -- # val= 00:05:47.450 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.450 14:10:28 -- accel/accel.sh@20 -- # val=software 00:05:47.450 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.450 14:10:28 -- accel/accel.sh@22 -- # accel_module=software 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.450 14:10:28 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:47.450 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.450 14:10:28 -- accel/accel.sh@20 -- # val=32 00:05:47.450 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.450 14:10:28 
-- accel/accel.sh@19 -- # read -r var val 00:05:47.450 14:10:28 -- accel/accel.sh@20 -- # val=32 00:05:47.450 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.450 14:10:28 -- accel/accel.sh@20 -- # val=2 00:05:47.450 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.450 14:10:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.450 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.450 14:10:28 -- accel/accel.sh@20 -- # val=Yes 00:05:47.450 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.450 14:10:28 -- accel/accel.sh@20 -- # val= 00:05:47.450 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:47.450 14:10:28 -- accel/accel.sh@20 -- # val= 00:05:47.450 14:10:28 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # IFS=: 00:05:47.450 14:10:28 -- accel/accel.sh@19 -- # read -r var val 00:05:48.823 14:10:30 -- accel/accel.sh@20 -- # val= 00:05:48.823 14:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.823 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:48.823 14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:48.823 14:10:30 -- accel/accel.sh@20 -- # val= 00:05:48.823 14:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.823 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:48.823 14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:48.823 14:10:30 -- accel/accel.sh@20 -- # val= 00:05:48.823 14:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.823 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:48.823 14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:48.823 14:10:30 -- accel/accel.sh@20 -- # val= 00:05:48.823 14:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.823 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:48.823 14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:48.823 14:10:30 -- accel/accel.sh@20 -- # val= 00:05:48.823 14:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.823 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:48.823 14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:48.823 14:10:30 -- accel/accel.sh@20 -- # val= 00:05:48.823 14:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.823 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:48.823 14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:48.823 14:10:30 -- accel/accel.sh@20 -- # val= 00:05:48.823 14:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.823 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:48.823 14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:48.823 14:10:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.823 14:10:30 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:48.823 14:10:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.823 00:05:48.823 real 0m1.425s 00:05:48.823 user 0m1.298s 00:05:48.823 sys 0m0.128s 00:05:48.823 14:10:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:48.823 14:10:30 -- common/autotest_common.sh@10 -- # set +x 
00:05:48.823 ************************************ 00:05:48.823 END TEST accel_decomp_mthread 00:05:48.823 ************************************ 00:05:48.823 14:10:30 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:48.823 14:10:30 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:48.823 14:10:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.823 14:10:30 -- common/autotest_common.sh@10 -- # set +x 00:05:48.823 ************************************ 00:05:48.823 START TEST accel_deomp_full_mthread 00:05:48.823 ************************************ 00:05:48.823 14:10:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:48.823 14:10:30 -- accel/accel.sh@16 -- # local accel_opc 00:05:48.823 14:10:30 -- accel/accel.sh@17 -- # local accel_module 00:05:48.823 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:48.823 14:10:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:48.823 14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:48.823 14:10:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:48.823 14:10:30 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.823 14:10:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.823 14:10:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.823 14:10:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.823 14:10:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.823 14:10:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.823 14:10:30 -- accel/accel.sh@40 -- # local IFS=, 00:05:48.823 14:10:30 -- accel/accel.sh@41 -- # jq -r . 00:05:48.823 [2024-04-26 14:10:30.172130] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
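Note: this final variant combines the full-buffer and multi-thread options, so both '111250 bytes' and val=2 appear in the trace that follows, and the closing [[ -n software ]] checks again confirm the software module handled the work. Sketch, same assumptions as above:

  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -T 2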
00:05:48.823 [2024-04-26 14:10:30.172198] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071620 ] 00:05:48.823 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.823 [2024-04-26 14:10:30.230908] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.823 [2024-04-26 14:10:30.348589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.082 14:10:30 -- accel/accel.sh@20 -- # val= 00:05:49.082 14:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:49.082 14:10:30 -- accel/accel.sh@20 -- # val= 00:05:49.082 14:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:49.082 14:10:30 -- accel/accel.sh@20 -- # val= 00:05:49.082 14:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:49.082 14:10:30 -- accel/accel.sh@20 -- # val=0x1 00:05:49.082 14:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:49.082 14:10:30 -- accel/accel.sh@20 -- # val= 00:05:49.082 14:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:49.082 14:10:30 -- accel/accel.sh@20 -- # val= 00:05:49.082 14:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:49.082 14:10:30 -- accel/accel.sh@20 -- # val=decompress 00:05:49.082 14:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.082 14:10:30 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:49.082 14:10:30 -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:49.082 14:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:49.082 14:10:30 -- accel/accel.sh@20 -- # val= 00:05:49.082 14:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:49.082 14:10:30 -- accel/accel.sh@20 -- # val=software 00:05:49.082 14:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.082 14:10:30 -- accel/accel.sh@22 -- # accel_module=software 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:49.082 14:10:30 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:49.082 14:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:49.082 14:10:30 -- accel/accel.sh@20 -- # val=32 00:05:49.082 14:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:49.082 
14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:49.082 14:10:30 -- accel/accel.sh@20 -- # val=32 00:05:49.082 14:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:49.082 14:10:30 -- accel/accel.sh@20 -- # val=2 00:05:49.082 14:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:49.082 14:10:30 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:49.082 14:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:49.082 14:10:30 -- accel/accel.sh@20 -- # val=Yes 00:05:49.082 14:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:49.082 14:10:30 -- accel/accel.sh@20 -- # val= 00:05:49.082 14:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:49.082 14:10:30 -- accel/accel.sh@20 -- # val= 00:05:49.082 14:10:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # IFS=: 00:05:49.082 14:10:30 -- accel/accel.sh@19 -- # read -r var val 00:05:50.457 14:10:31 -- accel/accel.sh@20 -- # val= 00:05:50.457 14:10:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.457 14:10:31 -- accel/accel.sh@19 -- # IFS=: 00:05:50.457 14:10:31 -- accel/accel.sh@19 -- # read -r var val 00:05:50.457 14:10:31 -- accel/accel.sh@20 -- # val= 00:05:50.457 14:10:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.457 14:10:31 -- accel/accel.sh@19 -- # IFS=: 00:05:50.457 14:10:31 -- accel/accel.sh@19 -- # read -r var val 00:05:50.457 14:10:31 -- accel/accel.sh@20 -- # val= 00:05:50.457 14:10:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.457 14:10:31 -- accel/accel.sh@19 -- # IFS=: 00:05:50.457 14:10:31 -- accel/accel.sh@19 -- # read -r var val 00:05:50.457 14:10:31 -- accel/accel.sh@20 -- # val= 00:05:50.457 14:10:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.457 14:10:31 -- accel/accel.sh@19 -- # IFS=: 00:05:50.457 14:10:31 -- accel/accel.sh@19 -- # read -r var val 00:05:50.457 14:10:31 -- accel/accel.sh@20 -- # val= 00:05:50.457 14:10:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.457 14:10:31 -- accel/accel.sh@19 -- # IFS=: 00:05:50.457 14:10:31 -- accel/accel.sh@19 -- # read -r var val 00:05:50.457 14:10:31 -- accel/accel.sh@20 -- # val= 00:05:50.457 14:10:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.457 14:10:31 -- accel/accel.sh@19 -- # IFS=: 00:05:50.457 14:10:31 -- accel/accel.sh@19 -- # read -r var val 00:05:50.457 14:10:31 -- accel/accel.sh@20 -- # val= 00:05:50.457 14:10:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.457 14:10:31 -- accel/accel.sh@19 -- # IFS=: 00:05:50.457 14:10:31 -- accel/accel.sh@19 -- # read -r var val 00:05:50.457 14:10:31 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.457 14:10:31 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:50.457 14:10:31 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.457 00:05:50.457 real 0m1.451s 00:05:50.457 user 0m1.330s 00:05:50.457 sys 0m0.123s 00:05:50.457 14:10:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:50.457 14:10:31 -- common/autotest_common.sh@10 -- # 
set +x 00:05:50.457 ************************************ 00:05:50.457 END TEST accel_decomp_full_mthread 00:05:50.457 ************************************ 00:05:50.457 14:10:31 -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:50.457 14:10:31 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:50.457 14:10:31 -- accel/accel.sh@137 -- # build_accel_config 00:05:50.457 14:10:31 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:50.457 14:10:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.457 14:10:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.457 14:10:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.457 14:10:31 -- common/autotest_common.sh@10 -- # set +x 00:05:50.457 14:10:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.457 14:10:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.457 14:10:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.457 14:10:31 -- accel/accel.sh@40 -- # local IFS=, 00:05:50.457 14:10:31 -- accel/accel.sh@41 -- # jq -r . 00:05:50.457 ************************************ 00:05:50.457 START TEST accel_dif_functional_tests 00:05:50.457 ************************************ 00:05:50.457 14:10:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:50.457 [2024-04-26 14:10:31.783590] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:05:50.457 [2024-04-26 14:10:31.783689] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071812 ] 00:05:50.457 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.457 [2024-04-26 14:10:31.843284] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:50.457 [2024-04-26 14:10:31.962796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.457 [2024-04-26 14:10:31.962878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.457 [2024-04-26 14:10:31.962912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.716 00:05:50.716 00:05:50.716 CUnit - A unit testing framework for C - Version 2.1-3 00:05:50.716 http://cunit.sourceforge.net/ 00:05:50.716 00:05:50.716 00:05:50.716 Suite: accel_dif 00:05:50.716 Test: verify: DIF generated, GUARD check ...passed 00:05:50.716 Test: verify: DIF generated, APPTAG check ...passed 00:05:50.716 Test: verify: DIF generated, REFTAG check ...passed 00:05:50.716 Test: verify: DIF not generated, GUARD check ...[2024-04-26 14:10:32.049245] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:50.716 [2024-04-26 14:10:32.049311] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:50.716 passed 00:05:50.716 Test: verify: DIF not generated, APPTAG check ...[2024-04-26 14:10:32.049367] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:50.716 [2024-04-26 14:10:32.049398] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:50.716 passed 00:05:50.716 Test: verify: DIF not generated, REFTAG check ...[2024-04-26 14:10:32.049444] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:50.716 [2024-04-26
14:10:32.049475] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:50.716 passed 00:05:50.716 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:50.716 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-26 14:10:32.049568] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:50.716 passed 00:05:50.716 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:50.716 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:50.716 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:50.716 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-26 14:10:32.049776] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:50.716 passed 00:05:50.716 Test: generate copy: DIF generated, GUARD check ...passed 00:05:50.716 Test: generate copy: DIF generated, APPTAG check ...passed 00:05:50.716 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:50.716 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:50.716 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:50.716 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:50.716 Test: generate copy: iovecs-len validate ...[2024-04-26 14:10:32.050072] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:05:50.716 passed 00:05:50.716 Test: generate copy: buffer alignment validate ...passed 00:05:50.716 00:05:50.716 Run Summary: Type Total Ran Passed Failed Inactive 00:05:50.716 suites 1 1 n/a 0 0 00:05:50.716 tests 20 20 20 0 0 00:05:50.716 asserts 204 204 204 0 n/a 00:05:50.716 00:05:50.716 Elapsed time = 0.003 seconds 00:05:50.716 00:05:50.716 real 0m0.507s 00:05:50.716 user 0m0.671s 00:05:50.716 sys 0m0.157s 00:05:50.716 14:10:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:50.716 14:10:32 -- common/autotest_common.sh@10 -- # set +x 00:05:50.716 ************************************ 00:05:50.716 END TEST accel_dif_functional_tests 00:05:50.716 ************************************ 00:05:50.716 00:05:50.716 real 0m34.117s 00:05:50.716 user 0m36.139s 00:05:50.716 sys 0m5.198s 00:05:50.716 14:10:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:50.716 14:10:32 -- common/autotest_common.sh@10 -- # set +x 00:05:50.716 ************************************ 00:05:50.716 END TEST accel 00:05:50.716 ************************************ 00:05:50.974 14:10:32 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:50.974 14:10:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.974 14:10:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.974 14:10:32 -- common/autotest_common.sh@10 -- # set +x 00:05:50.974 ************************************ 00:05:50.974 START TEST accel_rpc 00:05:50.974 ************************************ 00:05:50.974 14:10:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:50.974 * Looking for test storage...
00:05:50.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:50.974 14:10:32 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:50.974 14:10:32 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3071916 00:05:50.974 14:10:32 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:50.974 14:10:32 -- accel/accel_rpc.sh@15 -- # waitforlisten 3071916 00:05:50.974 14:10:32 -- common/autotest_common.sh@817 -- # '[' -z 3071916 ']' 00:05:50.974 14:10:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.974 14:10:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:50.974 14:10:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.974 14:10:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:50.974 14:10:32 -- common/autotest_common.sh@10 -- # set +x 00:05:50.974 [2024-04-26 14:10:32.508547] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:05:50.974 [2024-04-26 14:10:32.508656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071916 ] 00:05:50.974 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.233 [2024-04-26 14:10:32.586320] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.233 [2024-04-26 14:10:32.735866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.195 14:10:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:52.195 14:10:33 -- common/autotest_common.sh@850 -- # return 0 00:05:52.195 14:10:33 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:52.195 14:10:33 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:52.195 14:10:33 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:52.195 14:10:33 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:52.195 14:10:33 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:52.195 14:10:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:52.195 14:10:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.195 14:10:33 -- common/autotest_common.sh@10 -- # set +x 00:05:52.195 ************************************ 00:05:52.195 START TEST accel_assign_opcode 00:05:52.195 ************************************ 00:05:52.195 14:10:33 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:05:52.195 14:10:33 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:52.195 14:10:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:52.195 14:10:33 -- common/autotest_common.sh@10 -- # set +x 00:05:52.195 [2024-04-26 14:10:33.630792] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:52.195 14:10:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:52.195 14:10:33 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:52.195 14:10:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:52.195 14:10:33 -- common/autotest_common.sh@10 -- # set +x 00:05:52.195 [2024-04-26 14:10:33.638768] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: 
Operation copy will be assigned to module software 00:05:52.195 14:10:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:52.195 14:10:33 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:52.195 14:10:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:52.195 14:10:33 -- common/autotest_common.sh@10 -- # set +x 00:05:52.454 14:10:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:52.454 14:10:33 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:52.454 14:10:33 -- accel/accel_rpc.sh@42 -- # grep software 00:05:52.454 14:10:33 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:52.454 14:10:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:52.454 14:10:33 -- common/autotest_common.sh@10 -- # set +x 00:05:52.454 14:10:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:52.454 software 00:05:52.454 00:05:52.454 real 0m0.271s 00:05:52.454 user 0m0.040s 00:05:52.454 sys 0m0.008s 00:05:52.454 14:10:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:52.454 14:10:33 -- common/autotest_common.sh@10 -- # set +x 00:05:52.454 ************************************ 00:05:52.454 END TEST accel_assign_opcode 00:05:52.454 ************************************ 00:05:52.454 14:10:33 -- accel/accel_rpc.sh@55 -- # killprocess 3071916 00:05:52.454 14:10:33 -- common/autotest_common.sh@936 -- # '[' -z 3071916 ']' 00:05:52.454 14:10:33 -- common/autotest_common.sh@940 -- # kill -0 3071916 00:05:52.454 14:10:33 -- common/autotest_common.sh@941 -- # uname 00:05:52.454 14:10:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:52.454 14:10:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3071916 00:05:52.454 14:10:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:52.454 14:10:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:52.454 14:10:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3071916' 00:05:52.454 killing process with pid 3071916 00:05:52.454 14:10:33 -- common/autotest_common.sh@955 -- # kill 3071916 00:05:52.454 14:10:33 -- common/autotest_common.sh@960 -- # wait 3071916 00:05:52.713 00:05:52.713 real 0m1.865s 00:05:52.713 user 0m2.125s 00:05:52.713 sys 0m0.463s 00:05:52.713 14:10:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:52.713 14:10:34 -- common/autotest_common.sh@10 -- # set +x 00:05:52.713 ************************************ 00:05:52.713 END TEST accel_rpc 00:05:52.713 ************************************ 00:05:52.973 14:10:34 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:52.973 14:10:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:52.973 14:10:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.973 14:10:34 -- common/autotest_common.sh@10 -- # set +x 00:05:52.973 ************************************ 00:05:52.973 START TEST app_cmdline 00:05:52.973 ************************************ 00:05:52.973 14:10:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:52.973 * Looking for test storage... 
00:05:52.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:52.973 14:10:34 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:52.973 14:10:34 -- app/cmdline.sh@17 -- # spdk_tgt_pid=3072200 00:05:52.973 14:10:34 -- app/cmdline.sh@18 -- # waitforlisten 3072200 00:05:52.973 14:10:34 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:52.973 14:10:34 -- common/autotest_common.sh@817 -- # '[' -z 3072200 ']' 00:05:52.973 14:10:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.973 14:10:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:52.973 14:10:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.973 14:10:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:52.973 14:10:34 -- common/autotest_common.sh@10 -- # set +x 00:05:52.973 [2024-04-26 14:10:34.520366] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:05:52.973 [2024-04-26 14:10:34.520460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072200 ] 00:05:53.232 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.232 [2024-04-26 14:10:34.580161] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.232 [2024-04-26 14:10:34.694926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.490 14:10:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:53.490 14:10:34 -- common/autotest_common.sh@850 -- # return 0 00:05:53.490 14:10:34 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:53.749 { 00:05:53.749 "version": "SPDK v24.05-pre git sha1 7f48663af", 00:05:53.749 "fields": { 00:05:53.749 "major": 24, 00:05:53.749 "minor": 5, 00:05:53.749 "patch": 0, 00:05:53.749 "suffix": "-pre", 00:05:53.749 "commit": "7f48663af" 00:05:53.749 } 00:05:53.749 } 00:05:53.749 14:10:35 -- app/cmdline.sh@22 -- # expected_methods=() 00:05:53.749 14:10:35 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:53.749 14:10:35 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:53.749 14:10:35 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:53.749 14:10:35 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:53.749 14:10:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:53.749 14:10:35 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:53.749 14:10:35 -- common/autotest_common.sh@10 -- # set +x 00:05:53.749 14:10:35 -- app/cmdline.sh@26 -- # sort 00:05:53.749 14:10:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:53.749 14:10:35 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:53.749 14:10:35 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:53.749 14:10:35 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:53.749 14:10:35 -- common/autotest_common.sh@638 -- # local es=0 00:05:53.749 14:10:35 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:53.749 14:10:35 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:53.749 14:10:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:53.749 14:10:35 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:53.749 14:10:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:53.749 14:10:35 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:53.749 14:10:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:53.749 14:10:35 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:53.749 14:10:35 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:53.749 14:10:35 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:54.008 request: 00:05:54.008 { 00:05:54.008 "method": "env_dpdk_get_mem_stats", 00:05:54.008 "req_id": 1 00:05:54.008 } 00:05:54.008 Got JSON-RPC error response 00:05:54.008 response: 00:05:54.008 { 00:05:54.008 "code": -32601, 00:05:54.008 "message": "Method not found" 00:05:54.008 } 00:05:54.008 14:10:35 -- common/autotest_common.sh@641 -- # es=1 00:05:54.008 14:10:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:54.008 14:10:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:54.008 14:10:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:54.008 14:10:35 -- app/cmdline.sh@1 -- # killprocess 3072200 00:05:54.008 14:10:35 -- common/autotest_common.sh@936 -- # '[' -z 3072200 ']' 00:05:54.008 14:10:35 -- common/autotest_common.sh@940 -- # kill -0 3072200 00:05:54.008 14:10:35 -- common/autotest_common.sh@941 -- # uname 00:05:54.008 14:10:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.008 14:10:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3072200 00:05:54.267 14:10:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:54.267 14:10:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:54.267 14:10:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3072200' 00:05:54.267 killing process with pid 3072200 00:05:54.267 14:10:35 -- common/autotest_common.sh@955 -- # kill 3072200 00:05:54.267 14:10:35 -- common/autotest_common.sh@960 -- # wait 3072200 00:05:54.525 00:05:54.525 real 0m1.496s 00:05:54.525 user 0m1.954s 00:05:54.525 sys 0m0.443s 00:05:54.525 14:10:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:54.525 14:10:35 -- common/autotest_common.sh@10 -- # set +x 00:05:54.525 ************************************ 00:05:54.525 END TEST app_cmdline 00:05:54.525 ************************************ 00:05:54.525 14:10:35 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:54.525 14:10:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.525 14:10:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.525 14:10:35 -- common/autotest_common.sh@10 -- # set +x 00:05:54.525 ************************************ 00:05:54.525 START TEST version 00:05:54.525 
************************************ 00:05:54.525 14:10:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:54.525 * Looking for test storage... 00:05:54.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:54.525 14:10:36 -- app/version.sh@17 -- # get_header_version major 00:05:54.525 14:10:36 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:54.525 14:10:36 -- app/version.sh@14 -- # cut -f2 00:05:54.784 14:10:36 -- app/version.sh@14 -- # tr -d '"' 00:05:54.784 14:10:36 -- app/version.sh@17 -- # major=24 00:05:54.784 14:10:36 -- app/version.sh@18 -- # get_header_version minor 00:05:54.785 14:10:36 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:54.785 14:10:36 -- app/version.sh@14 -- # cut -f2 00:05:54.785 14:10:36 -- app/version.sh@14 -- # tr -d '"' 00:05:54.785 14:10:36 -- app/version.sh@18 -- # minor=5 00:05:54.785 14:10:36 -- app/version.sh@19 -- # get_header_version patch 00:05:54.785 14:10:36 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:54.785 14:10:36 -- app/version.sh@14 -- # cut -f2 00:05:54.785 14:10:36 -- app/version.sh@14 -- # tr -d '"' 00:05:54.785 14:10:36 -- app/version.sh@19 -- # patch=0 00:05:54.785 14:10:36 -- app/version.sh@20 -- # get_header_version suffix 00:05:54.785 14:10:36 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:54.785 14:10:36 -- app/version.sh@14 -- # cut -f2 00:05:54.785 14:10:36 -- app/version.sh@14 -- # tr -d '"' 00:05:54.785 14:10:36 -- app/version.sh@20 -- # suffix=-pre 00:05:54.785 14:10:36 -- app/version.sh@22 -- # version=24.5 00:05:54.785 14:10:36 -- app/version.sh@25 -- # (( patch != 0 )) 00:05:54.785 14:10:36 -- app/version.sh@28 -- # version=24.5rc0 00:05:54.785 14:10:36 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:54.785 14:10:36 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:54.785 14:10:36 -- app/version.sh@30 -- # py_version=24.5rc0 00:05:54.785 14:10:36 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:05:54.785 00:05:54.785 real 0m0.107s 00:05:54.785 user 0m0.056s 00:05:54.785 sys 0m0.072s 00:05:54.785 14:10:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:54.785 14:10:36 -- common/autotest_common.sh@10 -- # set +x 00:05:54.785 ************************************ 00:05:54.785 END TEST version 00:05:54.785 ************************************ 00:05:54.785 14:10:36 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:05:54.785 14:10:36 -- spdk/autotest.sh@194 -- # uname -s 00:05:54.785 14:10:36 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:54.785 14:10:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:54.785 14:10:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:54.785 14:10:36 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:54.785 14:10:36 
-- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:05:54.785 14:10:36 -- spdk/autotest.sh@258 -- # timing_exit lib 00:05:54.785 14:10:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:54.785 14:10:36 -- common/autotest_common.sh@10 -- # set +x 00:05:54.785 14:10:36 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:05:54.785 14:10:36 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:05:54.785 14:10:36 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:05:54.785 14:10:36 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:05:54.785 14:10:36 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:05:54.785 14:10:36 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:05:54.785 14:10:36 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:54.785 14:10:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:54.785 14:10:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.785 14:10:36 -- common/autotest_common.sh@10 -- # set +x 00:05:54.785 ************************************ 00:05:54.785 START TEST nvmf_tcp 00:05:54.785 ************************************ 00:05:54.785 14:10:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:54.785 * Looking for test storage... 00:05:55.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:55.044 14:10:36 -- nvmf/nvmf.sh@10 -- # uname -s 00:05:55.044 14:10:36 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:55.044 14:10:36 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:55.044 14:10:36 -- nvmf/common.sh@7 -- # uname -s 00:05:55.044 14:10:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:55.044 14:10:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:55.045 14:10:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:55.045 14:10:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:55.045 14:10:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:55.045 14:10:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:55.045 14:10:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:55.045 14:10:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:55.045 14:10:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:55.045 14:10:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:55.045 14:10:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:05:55.045 14:10:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:05:55.045 14:10:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:55.045 14:10:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:55.045 14:10:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:55.045 14:10:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:55.045 14:10:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:55.045 14:10:36 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:55.045 14:10:36 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:55.045 14:10:36 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:55.045 14:10:36 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.045 14:10:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.045 14:10:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.045 14:10:36 -- paths/export.sh@5 -- # export PATH 00:05:55.045 14:10:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.045 14:10:36 -- nvmf/common.sh@47 -- # : 0 00:05:55.045 14:10:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:55.045 14:10:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:55.045 14:10:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:55.045 14:10:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:55.045 14:10:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:55.045 14:10:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:55.045 14:10:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:55.045 14:10:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:55.045 14:10:36 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:55.045 14:10:36 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:55.045 14:10:36 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:55.045 14:10:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:55.045 14:10:36 -- common/autotest_common.sh@10 -- # set +x 00:05:55.045 14:10:36 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:55.045 14:10:36 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:55.045 14:10:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:55.045 14:10:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.045 14:10:36 -- common/autotest_common.sh@10 -- # set +x 00:05:55.045 ************************************ 00:05:55.045 START TEST nvmf_example 00:05:55.045 ************************************ 00:05:55.045 14:10:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:55.045 * Looking for test storage... 
00:05:55.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:55.045 14:10:36 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:55.045 14:10:36 -- nvmf/common.sh@7 -- # uname -s 00:05:55.045 14:10:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:55.045 14:10:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:55.045 14:10:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:55.045 14:10:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:55.045 14:10:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:55.045 14:10:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:55.045 14:10:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:55.045 14:10:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:55.045 14:10:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:55.045 14:10:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:55.045 14:10:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:05:55.045 14:10:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:05:55.045 14:10:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:55.045 14:10:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:55.045 14:10:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:55.045 14:10:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:55.045 14:10:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:55.045 14:10:36 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:55.045 14:10:36 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:55.045 14:10:36 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:55.045 14:10:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.045 14:10:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.045 14:10:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.045 14:10:36 -- paths/export.sh@5 -- # export PATH 00:05:55.045 14:10:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.045 14:10:36 -- nvmf/common.sh@47 -- # : 0 00:05:55.045 14:10:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:55.045 14:10:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:55.045 14:10:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:55.045 14:10:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:55.045 14:10:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:55.045 14:10:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:55.045 14:10:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:55.045 14:10:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:55.045 14:10:36 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:55.045 14:10:36 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:55.045 14:10:36 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:55.045 14:10:36 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:55.045 14:10:36 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:55.045 14:10:36 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:55.045 14:10:36 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:55.045 14:10:36 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:55.045 14:10:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:55.045 14:10:36 -- common/autotest_common.sh@10 -- # set +x 00:05:55.045 14:10:36 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:55.045 14:10:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:05:55.045 14:10:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:55.045 14:10:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:05:55.045 14:10:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:05:55.045 14:10:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:05:55.045 14:10:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:55.045 14:10:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:55.045 14:10:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:55.045 14:10:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:05:55.045 14:10:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:05:55.045 14:10:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:05:55.045 14:10:36 -- 
common/autotest_common.sh@10 -- # set +x 00:05:56.950 14:10:38 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:05:56.950 14:10:38 -- nvmf/common.sh@291 -- # pci_devs=() 00:05:56.950 14:10:38 -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:56.950 14:10:38 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:56.951 14:10:38 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:56.951 14:10:38 -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:56.951 14:10:38 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:56.951 14:10:38 -- nvmf/common.sh@295 -- # net_devs=() 00:05:56.951 14:10:38 -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:56.951 14:10:38 -- nvmf/common.sh@296 -- # e810=() 00:05:56.951 14:10:38 -- nvmf/common.sh@296 -- # local -ga e810 00:05:56.951 14:10:38 -- nvmf/common.sh@297 -- # x722=() 00:05:56.951 14:10:38 -- nvmf/common.sh@297 -- # local -ga x722 00:05:56.951 14:10:38 -- nvmf/common.sh@298 -- # mlx=() 00:05:56.951 14:10:38 -- nvmf/common.sh@298 -- # local -ga mlx 00:05:56.951 14:10:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:56.951 14:10:38 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:56.951 14:10:38 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:56.951 14:10:38 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:56.951 14:10:38 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:56.951 14:10:38 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:56.951 14:10:38 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:56.951 14:10:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:56.951 14:10:38 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:56.951 14:10:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:56.951 14:10:38 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:56.951 14:10:38 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:56.951 14:10:38 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:56.951 14:10:38 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:56.951 14:10:38 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:56.951 14:10:38 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:56.951 14:10:38 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:56.951 14:10:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:56.951 14:10:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:05:56.951 Found 0000:08:00.0 (0x8086 - 0x159b) 00:05:56.951 14:10:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:56.951 14:10:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:56.951 14:10:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:56.951 14:10:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:56.951 14:10:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:56.951 14:10:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:56.951 14:10:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:05:56.951 Found 0000:08:00.1 (0x8086 - 0x159b) 00:05:56.951 14:10:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:56.951 14:10:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:56.951 14:10:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:56.951 14:10:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:05:56.951 14:10:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:56.951 14:10:38 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:56.951 14:10:38 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:56.951 14:10:38 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:56.951 14:10:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:56.951 14:10:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:56.951 14:10:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:56.951 14:10:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:56.951 14:10:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:05:56.951 Found net devices under 0000:08:00.0: cvl_0_0 00:05:56.951 14:10:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:56.951 14:10:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:56.951 14:10:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:56.951 14:10:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:56.951 14:10:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:56.951 14:10:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:05:56.951 Found net devices under 0000:08:00.1: cvl_0_1 00:05:56.951 14:10:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:56.951 14:10:38 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:05:56.951 14:10:38 -- nvmf/common.sh@403 -- # is_hw=yes 00:05:56.951 14:10:38 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:05:56.951 14:10:38 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:05:56.951 14:10:38 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:05:56.951 14:10:38 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:56.951 14:10:38 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:56.951 14:10:38 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:56.951 14:10:38 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:56.951 14:10:38 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:56.951 14:10:38 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:56.951 14:10:38 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:56.951 14:10:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:56.951 14:10:38 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:56.951 14:10:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:56.951 14:10:38 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:56.951 14:10:38 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:56.951 14:10:38 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:56.951 14:10:38 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:56.951 14:10:38 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:56.951 14:10:38 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:56.951 14:10:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:56.951 14:10:38 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:56.951 14:10:38 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:56.951 14:10:38 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:56.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:56.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:05:56.951 00:05:56.951 --- 10.0.0.2 ping statistics --- 00:05:56.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:56.951 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:05:56.951 14:10:38 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:56.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:56.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:05:56.951 00:05:56.951 --- 10.0.0.1 ping statistics --- 00:05:56.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:56.951 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:05:56.951 14:10:38 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:56.951 14:10:38 -- nvmf/common.sh@411 -- # return 0 00:05:56.951 14:10:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:05:56.951 14:10:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:56.951 14:10:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:05:56.951 14:10:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:05:56.951 14:10:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:56.951 14:10:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:05:56.951 14:10:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:05:56.951 14:10:38 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:56.951 14:10:38 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:56.951 14:10:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:56.951 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:05:56.951 14:10:38 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:05:56.951 14:10:38 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:05:56.951 14:10:38 -- target/nvmf_example.sh@34 -- # nvmfpid=3073729 00:05:56.951 14:10:38 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:56.951 14:10:38 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:56.951 14:10:38 -- target/nvmf_example.sh@36 -- # waitforlisten 3073729 00:05:56.951 14:10:38 -- common/autotest_common.sh@817 -- # '[' -z 3073729 ']' 00:05:56.951 14:10:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.951 14:10:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:56.951 14:10:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:56.951 14:10:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:56.951 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:05:56.951 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.210 14:10:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:57.210 14:10:38 -- common/autotest_common.sh@850 -- # return 0 00:05:57.210 14:10:38 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:57.210 14:10:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:57.210 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:05:57.210 14:10:38 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:57.210 14:10:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:57.210 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:05:57.210 14:10:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:57.211 14:10:38 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:57.211 14:10:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:57.211 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:05:57.211 14:10:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:57.211 14:10:38 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:57.211 14:10:38 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:57.211 14:10:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:57.211 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:05:57.211 14:10:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:57.211 14:10:38 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:57.211 14:10:38 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:57.211 14:10:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:57.211 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:05:57.211 14:10:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:57.211 14:10:38 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:57.211 14:10:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:57.211 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:05:57.211 14:10:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:57.211 14:10:38 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:57.211 14:10:38 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:57.211 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.409 Initializing NVMe Controllers 00:06:09.409 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:09.409 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:09.409 Initialization complete. Launching workers. 
00:06:09.409 ======================================================== 00:06:09.409 Latency(us) 00:06:09.409 Device Information : IOPS MiB/s Average min max 00:06:09.410 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13808.53 53.94 4634.41 735.67 19084.82 00:06:09.410 ======================================================== 00:06:09.410 Total : 13808.53 53.94 4634.41 735.67 19084.82 00:06:09.410 00:06:09.410 14:10:48 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:09.410 14:10:48 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:09.410 14:10:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:09.410 14:10:48 -- nvmf/common.sh@117 -- # sync 00:06:09.410 14:10:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:09.410 14:10:48 -- nvmf/common.sh@120 -- # set +e 00:06:09.410 14:10:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:09.410 14:10:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:09.410 rmmod nvme_tcp 00:06:09.410 rmmod nvme_fabrics 00:06:09.410 rmmod nvme_keyring 00:06:09.410 14:10:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:09.410 14:10:48 -- nvmf/common.sh@124 -- # set -e 00:06:09.410 14:10:48 -- nvmf/common.sh@125 -- # return 0 00:06:09.410 14:10:48 -- nvmf/common.sh@478 -- # '[' -n 3073729 ']' 00:06:09.410 14:10:48 -- nvmf/common.sh@479 -- # killprocess 3073729 00:06:09.410 14:10:48 -- common/autotest_common.sh@936 -- # '[' -z 3073729 ']' 00:06:09.410 14:10:48 -- common/autotest_common.sh@940 -- # kill -0 3073729 00:06:09.410 14:10:48 -- common/autotest_common.sh@941 -- # uname 00:06:09.410 14:10:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:09.410 14:10:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3073729 00:06:09.410 14:10:48 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:06:09.410 14:10:48 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:06:09.410 14:10:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3073729' 00:06:09.410 killing process with pid 3073729 00:06:09.410 14:10:48 -- common/autotest_common.sh@955 -- # kill 3073729 00:06:09.410 14:10:48 -- common/autotest_common.sh@960 -- # wait 3073729 00:06:09.410 nvmf threads initialize successfully 00:06:09.410 bdev subsystem init successfully 00:06:09.410 created a nvmf target service 00:06:09.410 create targets's poll groups done 00:06:09.410 all subsystems of target started 00:06:09.410 nvmf target is running 00:06:09.410 all subsystems of target stopped 00:06:09.410 destroy targets's poll groups done 00:06:09.410 destroyed the nvmf target service 00:06:09.410 bdev subsystem finish successfully 00:06:09.410 nvmf threads destroy successfully 00:06:09.410 14:10:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:09.410 14:10:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:09.410 14:10:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:09.410 14:10:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:09.410 14:10:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:09.410 14:10:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:09.410 14:10:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:09.410 14:10:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:09.669 14:10:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:09.669 14:10:51 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:09.669 14:10:51 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:06:09.669 14:10:51 -- common/autotest_common.sh@10 -- # set +x 00:06:09.669 00:06:09.669 real 0m14.671s 00:06:09.669 user 0m40.920s 00:06:09.669 sys 0m3.223s 00:06:09.669 14:10:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:09.669 14:10:51 -- common/autotest_common.sh@10 -- # set +x 00:06:09.669 ************************************ 00:06:09.669 END TEST nvmf_example 00:06:09.669 ************************************ 00:06:09.669 14:10:51 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:09.669 14:10:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:09.669 14:10:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.669 14:10:51 -- common/autotest_common.sh@10 -- # set +x 00:06:09.930 ************************************ 00:06:09.930 START TEST nvmf_filesystem 00:06:09.930 ************************************ 00:06:09.930 14:10:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:09.930 * Looking for test storage... 00:06:09.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:09.930 14:10:51 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:09.930 14:10:51 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:09.930 14:10:51 -- common/autotest_common.sh@34 -- # set -e 00:06:09.930 14:10:51 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:09.930 14:10:51 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:09.930 14:10:51 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:09.930 14:10:51 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:09.930 14:10:51 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:09.930 14:10:51 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:09.930 14:10:51 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:09.930 14:10:51 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:09.930 14:10:51 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:09.930 14:10:51 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:09.930 14:10:51 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:09.930 14:10:51 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:09.930 14:10:51 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:09.930 14:10:51 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:09.930 14:10:51 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:09.930 14:10:51 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:09.930 14:10:51 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:09.930 14:10:51 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:09.930 14:10:51 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:09.930 14:10:51 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:09.930 14:10:51 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:09.930 14:10:51 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:09.930 14:10:51 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:09.930 14:10:51 -- 
common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:09.930 14:10:51 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:09.930 14:10:51 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:09.930 14:10:51 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:09.930 14:10:51 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:09.930 14:10:51 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:09.930 14:10:51 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:09.930 14:10:51 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:09.930 14:10:51 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:09.930 14:10:51 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:09.930 14:10:51 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:09.930 14:10:51 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:09.930 14:10:51 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:09.930 14:10:51 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:09.930 14:10:51 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:09.930 14:10:51 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:09.930 14:10:51 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:09.930 14:10:51 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:09.930 14:10:51 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:09.930 14:10:51 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:09.930 14:10:51 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:09.930 14:10:51 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:09.930 14:10:51 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:09.930 14:10:51 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:09.930 14:10:51 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:09.930 14:10:51 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:09.930 14:10:51 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:09.930 14:10:51 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:09.930 14:10:51 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:09.930 14:10:51 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:09.930 14:10:51 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:09.930 14:10:51 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:09.930 14:10:51 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:06:09.930 14:10:51 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:09.930 14:10:51 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:06:09.930 14:10:51 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:06:09.930 14:10:51 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:06:09.930 14:10:51 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:06:09.930 14:10:51 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:06:09.930 14:10:51 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:06:09.930 14:10:51 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:06:09.930 14:10:51 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:06:09.930 14:10:51 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:06:09.930 14:10:51 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:06:09.930 14:10:51 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:06:09.930 14:10:51 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:06:09.930 
14:10:51 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:06:09.930 14:10:51 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:06:09.930 14:10:51 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:06:09.930 14:10:51 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:09.930 14:10:51 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:06:09.930 14:10:51 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:06:09.930 14:10:51 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:06:09.930 14:10:51 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:06:09.930 14:10:51 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:06:09.930 14:10:51 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:06:09.930 14:10:51 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:06:09.930 14:10:51 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:06:09.930 14:10:51 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:06:09.930 14:10:51 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:06:09.930 14:10:51 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:06:09.930 14:10:51 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:09.930 14:10:51 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:06:09.930 14:10:51 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:06:09.930 14:10:51 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:09.930 14:10:51 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:09.930 14:10:51 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:09.930 14:10:51 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:09.930 14:10:51 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:09.930 14:10:51 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:09.931 14:10:51 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:09.931 14:10:51 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:09.931 14:10:51 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:09.931 14:10:51 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:09.931 14:10:51 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:09.931 14:10:51 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:09.931 14:10:51 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:09.931 14:10:51 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:09.931 14:10:51 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:09.931 14:10:51 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:09.931 #define SPDK_CONFIG_H 00:06:09.931 #define SPDK_CONFIG_APPS 1 00:06:09.931 #define SPDK_CONFIG_ARCH native 00:06:09.931 #undef SPDK_CONFIG_ASAN 00:06:09.931 #undef SPDK_CONFIG_AVAHI 00:06:09.931 #undef SPDK_CONFIG_CET 00:06:09.931 #define SPDK_CONFIG_COVERAGE 1 00:06:09.931 #define SPDK_CONFIG_CROSS_PREFIX 00:06:09.931 #undef SPDK_CONFIG_CRYPTO 00:06:09.931 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:09.931 #undef 
SPDK_CONFIG_CUSTOMOCF 00:06:09.931 #undef SPDK_CONFIG_DAOS 00:06:09.931 #define SPDK_CONFIG_DAOS_DIR 00:06:09.931 #define SPDK_CONFIG_DEBUG 1 00:06:09.931 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:09.931 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:09.931 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:09.931 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:09.931 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:09.931 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:09.931 #define SPDK_CONFIG_EXAMPLES 1 00:06:09.931 #undef SPDK_CONFIG_FC 00:06:09.931 #define SPDK_CONFIG_FC_PATH 00:06:09.931 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:09.931 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:09.931 #undef SPDK_CONFIG_FUSE 00:06:09.931 #undef SPDK_CONFIG_FUZZER 00:06:09.931 #define SPDK_CONFIG_FUZZER_LIB 00:06:09.931 #undef SPDK_CONFIG_GOLANG 00:06:09.931 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:09.931 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:09.931 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:09.931 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:09.931 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:09.931 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:09.931 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:09.931 #define SPDK_CONFIG_IDXD 1 00:06:09.931 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:09.931 #undef SPDK_CONFIG_IPSEC_MB 00:06:09.931 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:09.931 #define SPDK_CONFIG_ISAL 1 00:06:09.931 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:09.931 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:09.931 #define SPDK_CONFIG_LIBDIR 00:06:09.931 #undef SPDK_CONFIG_LTO 00:06:09.931 #define SPDK_CONFIG_MAX_LCORES 00:06:09.931 #define SPDK_CONFIG_NVME_CUSE 1 00:06:09.931 #undef SPDK_CONFIG_OCF 00:06:09.931 #define SPDK_CONFIG_OCF_PATH 00:06:09.931 #define SPDK_CONFIG_OPENSSL_PATH 00:06:09.931 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:09.931 #define SPDK_CONFIG_PGO_DIR 00:06:09.931 #undef SPDK_CONFIG_PGO_USE 00:06:09.931 #define SPDK_CONFIG_PREFIX /usr/local 00:06:09.931 #undef SPDK_CONFIG_RAID5F 00:06:09.931 #undef SPDK_CONFIG_RBD 00:06:09.931 #define SPDK_CONFIG_RDMA 1 00:06:09.931 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:09.931 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:09.931 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:09.931 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:09.931 #define SPDK_CONFIG_SHARED 1 00:06:09.931 #undef SPDK_CONFIG_SMA 00:06:09.931 #define SPDK_CONFIG_TESTS 1 00:06:09.931 #undef SPDK_CONFIG_TSAN 00:06:09.931 #define SPDK_CONFIG_UBLK 1 00:06:09.931 #define SPDK_CONFIG_UBSAN 1 00:06:09.931 #undef SPDK_CONFIG_UNIT_TESTS 00:06:09.931 #undef SPDK_CONFIG_URING 00:06:09.931 #define SPDK_CONFIG_URING_PATH 00:06:09.931 #undef SPDK_CONFIG_URING_ZNS 00:06:09.931 #undef SPDK_CONFIG_USDT 00:06:09.931 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:09.931 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:09.931 #define SPDK_CONFIG_VFIO_USER 1 00:06:09.931 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:09.931 #define SPDK_CONFIG_VHOST 1 00:06:09.931 #define SPDK_CONFIG_VIRTIO 1 00:06:09.931 #undef SPDK_CONFIG_VTUNE 00:06:09.931 #define SPDK_CONFIG_VTUNE_DIR 00:06:09.931 #define SPDK_CONFIG_WERROR 1 00:06:09.931 #define SPDK_CONFIG_WPDK_DIR 00:06:09.931 #undef SPDK_CONFIG_XNVME 00:06:09.931 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:09.931 14:10:51 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:09.931 14:10:51 -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:09.931 14:10:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.931 14:10:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.931 14:10:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.931 14:10:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.931 14:10:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.931 14:10:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.931 14:10:51 -- paths/export.sh@5 -- # export PATH 00:06:09.931 14:10:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.931 14:10:51 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:09.931 14:10:51 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:09.931 14:10:51 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:09.931 14:10:51 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:09.931 14:10:51 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:09.931 14:10:51 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:09.931 14:10:51 -- pm/common@67 -- # TEST_TAG=N/A 00:06:09.931 14:10:51 -- pm/common@68 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:09.931 14:10:51 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:09.931 14:10:51 -- pm/common@71 -- # uname -s 00:06:09.931 14:10:51 -- pm/common@71 -- # PM_OS=Linux 00:06:09.931 14:10:51 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:09.931 14:10:51 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:06:09.931 14:10:51 -- pm/common@76 -- # [[ Linux == Linux ]] 00:06:09.931 14:10:51 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:06:09.931 14:10:51 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:06:09.931 14:10:51 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:09.931 14:10:51 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:09.931 14:10:51 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:06:09.931 14:10:51 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:06:09.931 14:10:51 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:09.931 14:10:51 -- common/autotest_common.sh@57 -- # : 0 00:06:09.931 14:10:51 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:06:09.931 14:10:51 -- common/autotest_common.sh@61 -- # : 0 00:06:09.931 14:10:51 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:09.931 14:10:51 -- common/autotest_common.sh@63 -- # : 0 00:06:09.931 14:10:51 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:06:09.931 14:10:51 -- common/autotest_common.sh@65 -- # : 1 00:06:09.931 14:10:51 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:09.931 14:10:51 -- common/autotest_common.sh@67 -- # : 0 00:06:09.931 14:10:51 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:06:09.931 14:10:51 -- common/autotest_common.sh@69 -- # : 00:06:09.931 14:10:51 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:06:09.931 14:10:51 -- common/autotest_common.sh@71 -- # : 0 00:06:09.931 14:10:51 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:06:09.931 14:10:51 -- common/autotest_common.sh@73 -- # : 0 00:06:09.931 14:10:51 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:06:09.931 14:10:51 -- common/autotest_common.sh@75 -- # : 0 00:06:09.931 14:10:51 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:06:09.931 14:10:51 -- common/autotest_common.sh@77 -- # : 0 00:06:09.931 14:10:51 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:09.931 14:10:51 -- common/autotest_common.sh@79 -- # : 0 00:06:09.931 14:10:51 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:06:09.931 14:10:51 -- common/autotest_common.sh@81 -- # : 0 00:06:09.931 14:10:51 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:06:09.931 14:10:51 -- common/autotest_common.sh@83 -- # : 0 00:06:09.931 14:10:51 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:06:09.931 14:10:51 -- common/autotest_common.sh@85 -- # : 1 00:06:09.931 14:10:51 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:06:09.932 14:10:51 -- common/autotest_common.sh@87 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:06:09.932 14:10:51 -- common/autotest_common.sh@89 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:06:09.932 14:10:51 -- common/autotest_common.sh@91 -- # : 1 
00:06:09.932 14:10:51 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:06:09.932 14:10:51 -- common/autotest_common.sh@93 -- # : 1 00:06:09.932 14:10:51 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:06:09.932 14:10:51 -- common/autotest_common.sh@95 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:09.932 14:10:51 -- common/autotest_common.sh@97 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:06:09.932 14:10:51 -- common/autotest_common.sh@99 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:06:09.932 14:10:51 -- common/autotest_common.sh@101 -- # : tcp 00:06:09.932 14:10:51 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:09.932 14:10:51 -- common/autotest_common.sh@103 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:06:09.932 14:10:51 -- common/autotest_common.sh@105 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:06:09.932 14:10:51 -- common/autotest_common.sh@107 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:06:09.932 14:10:51 -- common/autotest_common.sh@109 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:06:09.932 14:10:51 -- common/autotest_common.sh@111 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:06:09.932 14:10:51 -- common/autotest_common.sh@113 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:06:09.932 14:10:51 -- common/autotest_common.sh@115 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:06:09.932 14:10:51 -- common/autotest_common.sh@117 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:09.932 14:10:51 -- common/autotest_common.sh@119 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:06:09.932 14:10:51 -- common/autotest_common.sh@121 -- # : 1 00:06:09.932 14:10:51 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:06:09.932 14:10:51 -- common/autotest_common.sh@123 -- # : 00:06:09.932 14:10:51 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:09.932 14:10:51 -- common/autotest_common.sh@125 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:06:09.932 14:10:51 -- common/autotest_common.sh@127 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:06:09.932 14:10:51 -- common/autotest_common.sh@129 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:06:09.932 14:10:51 -- common/autotest_common.sh@131 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:06:09.932 14:10:51 -- common/autotest_common.sh@133 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:06:09.932 14:10:51 -- common/autotest_common.sh@135 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:06:09.932 14:10:51 -- common/autotest_common.sh@137 -- # : 00:06:09.932 14:10:51 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:06:09.932 14:10:51 -- 
common/autotest_common.sh@139 -- # : true 00:06:09.932 14:10:51 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:06:09.932 14:10:51 -- common/autotest_common.sh@141 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:06:09.932 14:10:51 -- common/autotest_common.sh@143 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:06:09.932 14:10:51 -- common/autotest_common.sh@145 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:06:09.932 14:10:51 -- common/autotest_common.sh@147 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:06:09.932 14:10:51 -- common/autotest_common.sh@149 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:06:09.932 14:10:51 -- common/autotest_common.sh@151 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:06:09.932 14:10:51 -- common/autotest_common.sh@153 -- # : e810 00:06:09.932 14:10:51 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:06:09.932 14:10:51 -- common/autotest_common.sh@155 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:06:09.932 14:10:51 -- common/autotest_common.sh@157 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:06:09.932 14:10:51 -- common/autotest_common.sh@159 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:06:09.932 14:10:51 -- common/autotest_common.sh@161 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:06:09.932 14:10:51 -- common/autotest_common.sh@163 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:06:09.932 14:10:51 -- common/autotest_common.sh@166 -- # : 00:06:09.932 14:10:51 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:06:09.932 14:10:51 -- common/autotest_common.sh@168 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:06:09.932 14:10:51 -- common/autotest_common.sh@170 -- # : 0 00:06:09.932 14:10:51 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:09.932 14:10:51 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:09.932 14:10:51 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:09.932 14:10:51 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:09.932 14:10:51 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:09.932 14:10:51 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:09.932 14:10:51 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:09.932 14:10:51 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:09.932 14:10:51 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:09.932 14:10:51 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:09.932 14:10:51 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:09.932 14:10:51 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:09.932 14:10:51 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:09.932 14:10:51 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:09.932 14:10:51 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:06:09.932 14:10:51 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:09.932 14:10:51 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:09.932 14:10:51 -- common/autotest_common.sh@193 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:09.932 14:10:51 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:09.932 14:10:51 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:09.932 14:10:51 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:06:09.932 14:10:51 -- common/autotest_common.sh@199 -- # cat 00:06:09.932 14:10:51 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:06:09.932 14:10:51 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:09.932 14:10:51 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:09.932 14:10:51 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:09.932 14:10:51 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:09.932 14:10:51 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:06:09.932 14:10:51 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:06:09.932 14:10:51 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:09.932 14:10:51 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:09.932 14:10:51 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:09.932 14:10:51 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:09.932 14:10:51 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:09.933 14:10:51 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:09.933 14:10:51 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:09.933 14:10:51 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:09.933 14:10:51 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:09.933 14:10:51 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:09.933 14:10:51 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:09.933 14:10:51 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:09.933 14:10:51 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:06:09.933 14:10:51 -- common/autotest_common.sh@252 -- # export valgrind= 00:06:09.933 14:10:51 -- common/autotest_common.sh@252 -- # valgrind= 00:06:09.933 14:10:51 -- common/autotest_common.sh@258 -- # uname -s 00:06:09.933 14:10:51 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:06:09.933 14:10:51 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:06:09.933 14:10:51 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:06:09.933 14:10:51 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:06:09.933 14:10:51 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:06:09.933 14:10:51 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:06:09.933 
14:10:51 -- common/autotest_common.sh@268 -- # MAKE=make 00:06:09.933 14:10:51 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j32 00:06:09.933 14:10:51 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:06:09.933 14:10:51 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:06:09.933 14:10:51 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:06:09.933 14:10:51 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:06:09.933 14:10:51 -- common/autotest_common.sh@289 -- # for i in "$@" 00:06:09.933 14:10:51 -- common/autotest_common.sh@290 -- # case "$i" in 00:06:09.933 14:10:51 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:06:09.933 14:10:51 -- common/autotest_common.sh@307 -- # [[ -z 3075048 ]] 00:06:09.933 14:10:51 -- common/autotest_common.sh@307 -- # kill -0 3075048 00:06:09.933 14:10:51 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:06:09.933 14:10:51 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:06:09.933 14:10:51 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:06:09.933 14:10:51 -- common/autotest_common.sh@320 -- # local mount target_dir 00:06:09.933 14:10:51 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:06:09.933 14:10:51 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:06:09.933 14:10:51 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:06:09.933 14:10:51 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:06:09.933 14:10:51 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.OZhMgs 00:06:09.933 14:10:51 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:09.933 14:10:51 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:06:09.933 14:10:51 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:06:09.933 14:10:51 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.OZhMgs/tests/target /tmp/spdk.OZhMgs 00:06:09.933 14:10:51 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:06:09.933 14:10:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:09.933 14:10:51 -- common/autotest_common.sh@316 -- # df -T 00:06:09.933 14:10:51 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:06:09.933 14:10:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:06:09.933 14:10:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:06:09.933 14:10:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:06:09.933 14:10:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:06:09.933 14:10:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:06:09.933 14:10:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:09.933 14:10:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:06:09.933 14:10:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:06:09.933 14:10:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=1052192768 00:06:09.933 14:10:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:06:09.933 14:10:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=4232237056 00:06:09.933 14:10:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:09.933 14:10:51 -- common/autotest_common.sh@350 -- # 
mounts["$mount"]=spdk_root 00:06:09.933 14:10:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:06:09.933 14:10:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=42314424320 00:06:09.933 14:10:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=53546168320 00:06:09.933 14:10:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=11231744000 00:06:09.933 14:10:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:09.933 14:10:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:09.933 14:10:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:09.933 14:10:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=26770468864 00:06:09.933 14:10:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=26773082112 00:06:09.933 14:10:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248 00:06:09.933 14:10:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:09.933 14:10:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:09.933 14:10:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:09.933 14:10:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=10700734464 00:06:09.933 14:10:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=10709233664 00:06:09.933 14:10:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=8499200 00:06:09.933 14:10:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:09.933 14:10:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:09.933 14:10:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:09.933 14:10:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=26772398080 00:06:09.933 14:10:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=26773086208 00:06:09.933 14:10:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=688128 00:06:09.933 14:10:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:09.933 14:10:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:09.933 14:10:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:09.933 14:10:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=5354610688 00:06:09.933 14:10:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5354614784 00:06:09.933 14:10:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:06:09.933 14:10:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:09.933 14:10:51 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:06:09.933 * Looking for test storage... 
00:06:09.933 14:10:51 -- common/autotest_common.sh@357 -- # local target_space new_size 00:06:09.933 14:10:51 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:06:09.933 14:10:51 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:09.933 14:10:51 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:09.933 14:10:51 -- common/autotest_common.sh@361 -- # mount=/ 00:06:09.933 14:10:51 -- common/autotest_common.sh@363 -- # target_space=42314424320 00:06:09.933 14:10:51 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:06:09.933 14:10:51 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:06:09.933 14:10:51 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:06:09.933 14:10:51 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:06:09.933 14:10:51 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:06:09.933 14:10:51 -- common/autotest_common.sh@370 -- # new_size=13446336512 00:06:09.933 14:10:51 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:09.933 14:10:51 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:09.933 14:10:51 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:09.933 14:10:51 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:09.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:09.933 14:10:51 -- common/autotest_common.sh@378 -- # return 0 00:06:09.933 14:10:51 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:06:09.933 14:10:51 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:06:09.933 14:10:51 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:09.933 14:10:51 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:09.933 14:10:51 -- common/autotest_common.sh@1673 -- # true 00:06:09.933 14:10:51 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:06:09.933 14:10:51 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:09.933 14:10:51 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:09.933 14:10:51 -- common/autotest_common.sh@27 -- # exec 00:06:09.933 14:10:51 -- common/autotest_common.sh@29 -- # exec 00:06:09.933 14:10:51 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:09.933 14:10:51 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:09.933 14:10:51 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:09.933 14:10:51 -- common/autotest_common.sh@18 -- # set -x 00:06:09.933 14:10:51 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:09.933 14:10:51 -- nvmf/common.sh@7 -- # uname -s 00:06:09.933 14:10:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:09.933 14:10:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:09.933 14:10:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:09.933 14:10:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:09.933 14:10:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:09.933 14:10:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:09.933 14:10:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:09.933 14:10:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:09.933 14:10:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:09.933 14:10:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:09.933 14:10:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:06:09.933 14:10:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:06:09.933 14:10:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:09.933 14:10:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:09.934 14:10:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:09.934 14:10:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:09.934 14:10:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:09.934 14:10:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.934 14:10:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.934 14:10:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.934 14:10:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.934 14:10:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.934 14:10:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.934 14:10:51 -- paths/export.sh@5 -- # export PATH 00:06:09.934 14:10:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.934 14:10:51 -- nvmf/common.sh@47 -- # : 0 00:06:09.934 14:10:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:09.934 14:10:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:09.934 14:10:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:09.934 14:10:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:09.934 14:10:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:09.934 14:10:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:09.934 14:10:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:09.934 14:10:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:09.934 14:10:51 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:09.934 14:10:51 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:09.934 14:10:51 -- target/filesystem.sh@15 -- # nvmftestinit 00:06:09.934 14:10:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:09.934 14:10:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:09.934 14:10:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:09.934 14:10:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:09.934 14:10:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:09.934 14:10:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:09.934 14:10:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:09.934 14:10:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:09.934 14:10:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:09.934 14:10:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:09.934 14:10:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:09.934 14:10:51 -- common/autotest_common.sh@10 -- # set +x 00:06:11.840 14:10:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:11.840 14:10:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:11.840 14:10:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:11.840 14:10:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:11.840 14:10:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:11.840 14:10:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:11.840 14:10:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:11.840 14:10:53 -- 
nvmf/common.sh@295 -- # net_devs=() 00:06:11.840 14:10:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:11.840 14:10:53 -- nvmf/common.sh@296 -- # e810=() 00:06:11.840 14:10:53 -- nvmf/common.sh@296 -- # local -ga e810 00:06:11.840 14:10:53 -- nvmf/common.sh@297 -- # x722=() 00:06:11.840 14:10:53 -- nvmf/common.sh@297 -- # local -ga x722 00:06:11.840 14:10:53 -- nvmf/common.sh@298 -- # mlx=() 00:06:11.840 14:10:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:11.840 14:10:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:11.840 14:10:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:11.840 14:10:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:11.840 14:10:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:11.840 14:10:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:11.840 14:10:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:11.840 14:10:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:11.840 14:10:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:11.840 14:10:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:11.840 14:10:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:11.840 14:10:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:11.840 14:10:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:11.840 14:10:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:11.840 14:10:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:11.840 14:10:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:11.840 14:10:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:11.840 14:10:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:11.840 14:10:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:11.840 14:10:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:06:11.840 Found 0000:08:00.0 (0x8086 - 0x159b) 00:06:11.840 14:10:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:11.840 14:10:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:11.841 14:10:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:11.841 14:10:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:11.841 14:10:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:11.841 14:10:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:11.841 14:10:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:06:11.841 Found 0000:08:00.1 (0x8086 - 0x159b) 00:06:11.841 14:10:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:11.841 14:10:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:11.841 14:10:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:11.841 14:10:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:11.841 14:10:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:11.841 14:10:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:11.841 14:10:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:11.841 14:10:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:11.841 14:10:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:11.841 14:10:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:11.841 14:10:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:11.841 14:10:53 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:11.841 14:10:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:06:11.841 Found net devices under 0000:08:00.0: cvl_0_0 00:06:11.841 14:10:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:11.841 14:10:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:11.841 14:10:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:11.841 14:10:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:11.841 14:10:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:11.841 14:10:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:06:11.841 Found net devices under 0000:08:00.1: cvl_0_1 00:06:11.841 14:10:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:11.841 14:10:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:11.841 14:10:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:11.841 14:10:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:11.841 14:10:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:11.841 14:10:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:11.841 14:10:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:11.841 14:10:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:11.841 14:10:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:11.841 14:10:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:11.841 14:10:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:11.841 14:10:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:11.841 14:10:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:11.841 14:10:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:11.841 14:10:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:11.841 14:10:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:11.841 14:10:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:11.841 14:10:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:11.841 14:10:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:11.841 14:10:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:11.841 14:10:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:11.841 14:10:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:11.841 14:10:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:11.841 14:10:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:11.841 14:10:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:11.841 14:10:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:11.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:11.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:06:11.841 00:06:11.841 --- 10.0.0.2 ping statistics --- 00:06:11.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:11.841 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:06:11.841 14:10:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:11.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:11.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:06:11.841 00:06:11.841 --- 10.0.0.1 ping statistics --- 00:06:11.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:11.841 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:06:11.841 14:10:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:11.841 14:10:53 -- nvmf/common.sh@411 -- # return 0 00:06:11.841 14:10:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:11.841 14:10:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:11.841 14:10:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:11.841 14:10:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:11.841 14:10:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:11.841 14:10:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:11.841 14:10:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:11.841 14:10:53 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:11.841 14:10:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:11.841 14:10:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.841 14:10:53 -- common/autotest_common.sh@10 -- # set +x 00:06:11.841 ************************************ 00:06:11.841 START TEST nvmf_filesystem_no_in_capsule 00:06:11.841 ************************************ 00:06:11.841 14:10:53 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:06:11.841 14:10:53 -- target/filesystem.sh@47 -- # in_capsule=0 00:06:11.841 14:10:53 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:11.841 14:10:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:11.841 14:10:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:11.841 14:10:53 -- common/autotest_common.sh@10 -- # set +x 00:06:11.841 14:10:53 -- nvmf/common.sh@470 -- # nvmfpid=3076313 00:06:11.841 14:10:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:11.841 14:10:53 -- nvmf/common.sh@471 -- # waitforlisten 3076313 00:06:11.841 14:10:53 -- common/autotest_common.sh@817 -- # '[' -z 3076313 ']' 00:06:11.841 14:10:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.841 14:10:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:11.841 14:10:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.841 14:10:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:11.841 14:10:53 -- common/autotest_common.sh@10 -- # set +x 00:06:11.841 [2024-04-26 14:10:53.396361] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:06:11.841 [2024-04-26 14:10:53.396462] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:12.100 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.100 [2024-04-26 14:10:53.462774] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:12.100 [2024-04-26 14:10:53.583365] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
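The entries above show nvmf_tcp_init splitting the two E810 ports between network namespaces: cvl_0_0 becomes the target side inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as the initiator, and the target application is then launched inside the namespace. A condensed sketch of that bring-up, with the interface, namespace, and address names taken from the log (the nvmf_tgt path is this workspace's own build output, so treat it as an assumption for other checkouts):

    ip netns add cvl_0_0_ns_spdk                                 # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator keeps the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # admit NVMe/TCP on port 4420
    modprobe nvme-tcp                                            # kernel initiator driver
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The two pings (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) are the sanity check that both directions of that path work before any NVMe traffic is attempted.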
00:06:12.100 [2024-04-26 14:10:53.583428] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:12.100 [2024-04-26 14:10:53.583453] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:12.100 [2024-04-26 14:10:53.583475] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:12.100 [2024-04-26 14:10:53.583493] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:12.100 [2024-04-26 14:10:53.583593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.100 [2024-04-26 14:10:53.583655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.100 [2024-04-26 14:10:53.583710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.100 [2024-04-26 14:10:53.583718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.358 14:10:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:12.358 14:10:53 -- common/autotest_common.sh@850 -- # return 0 00:06:12.358 14:10:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:12.358 14:10:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:12.358 14:10:53 -- common/autotest_common.sh@10 -- # set +x 00:06:12.358 14:10:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:12.358 14:10:53 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:12.358 14:10:53 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:12.358 14:10:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:12.358 14:10:53 -- common/autotest_common.sh@10 -- # set +x 00:06:12.358 [2024-04-26 14:10:53.737317] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:12.358 14:10:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:12.358 14:10:53 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:12.358 14:10:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:12.358 14:10:53 -- common/autotest_common.sh@10 -- # set +x 00:06:12.358 Malloc1 00:06:12.358 14:10:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:12.358 14:10:53 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:12.358 14:10:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:12.358 14:10:53 -- common/autotest_common.sh@10 -- # set +x 00:06:12.358 14:10:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:12.358 14:10:53 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:12.358 14:10:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:12.358 14:10:53 -- common/autotest_common.sh@10 -- # set +x 00:06:12.358 14:10:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:12.358 14:10:53 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:12.358 14:10:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:12.358 14:10:53 -- common/autotest_common.sh@10 -- # set +x 00:06:12.358 [2024-04-26 14:10:53.903254] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:12.358 14:10:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:12.358 14:10:53 -- target/filesystem.sh@58 -- # get_bdev_size 
Malloc1 00:06:12.358 14:10:53 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:06:12.358 14:10:53 -- common/autotest_common.sh@1365 -- # local bdev_info 00:06:12.358 14:10:53 -- common/autotest_common.sh@1366 -- # local bs 00:06:12.358 14:10:53 -- common/autotest_common.sh@1367 -- # local nb 00:06:12.358 14:10:53 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:12.358 14:10:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:12.358 14:10:53 -- common/autotest_common.sh@10 -- # set +x 00:06:12.358 14:10:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:12.358 14:10:53 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:06:12.358 { 00:06:12.358 "name": "Malloc1", 00:06:12.358 "aliases": [ 00:06:12.358 "bb094e88-dfcc-4e49-a816-adec8c75b44e" 00:06:12.358 ], 00:06:12.358 "product_name": "Malloc disk", 00:06:12.358 "block_size": 512, 00:06:12.358 "num_blocks": 1048576, 00:06:12.358 "uuid": "bb094e88-dfcc-4e49-a816-adec8c75b44e", 00:06:12.358 "assigned_rate_limits": { 00:06:12.358 "rw_ios_per_sec": 0, 00:06:12.358 "rw_mbytes_per_sec": 0, 00:06:12.358 "r_mbytes_per_sec": 0, 00:06:12.358 "w_mbytes_per_sec": 0 00:06:12.358 }, 00:06:12.358 "claimed": true, 00:06:12.358 "claim_type": "exclusive_write", 00:06:12.358 "zoned": false, 00:06:12.358 "supported_io_types": { 00:06:12.358 "read": true, 00:06:12.358 "write": true, 00:06:12.358 "unmap": true, 00:06:12.359 "write_zeroes": true, 00:06:12.359 "flush": true, 00:06:12.359 "reset": true, 00:06:12.359 "compare": false, 00:06:12.359 "compare_and_write": false, 00:06:12.359 "abort": true, 00:06:12.359 "nvme_admin": false, 00:06:12.359 "nvme_io": false 00:06:12.359 }, 00:06:12.359 "memory_domains": [ 00:06:12.359 { 00:06:12.359 "dma_device_id": "system", 00:06:12.359 "dma_device_type": 1 00:06:12.359 }, 00:06:12.359 { 00:06:12.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:12.359 "dma_device_type": 2 00:06:12.359 } 00:06:12.359 ], 00:06:12.359 "driver_specific": {} 00:06:12.359 } 00:06:12.359 ]' 00:06:12.359 14:10:53 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:06:12.616 14:10:53 -- common/autotest_common.sh@1369 -- # bs=512 00:06:12.616 14:10:53 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:06:12.616 14:10:54 -- common/autotest_common.sh@1370 -- # nb=1048576 00:06:12.616 14:10:54 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:06:12.616 14:10:54 -- common/autotest_common.sh@1374 -- # echo 512 00:06:12.616 14:10:54 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:12.616 14:10:54 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:13.182 14:10:54 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:13.182 14:10:54 -- common/autotest_common.sh@1184 -- # local i=0 00:06:13.182 14:10:54 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:06:13.182 14:10:54 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:06:13.182 14:10:54 -- common/autotest_common.sh@1191 -- # sleep 2 00:06:15.088 14:10:56 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:06:15.088 14:10:56 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:06:15.088 14:10:56 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:06:15.088 14:10:56 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
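get_bdev_size above is plain arithmetic over the bdev_get_bdevs JSON: bytes = block_size * num_blocks, which for this Malloc1 is 512 * 1048576 = 536870912, matching the 512 MiB malloc bdev created earlier. A minimal stand-alone equivalent, assuming SPDK's scripts/rpc.py is on PATH and the target answers on the default RPC socket:

    bs=$(rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')   # 512
    nb=$(rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')   # 1048576
    echo $(( bs * nb ))                                             # 536870912

The test then compares this against the size reported for the connected /sys/block/nvme0n1 device (nvme_size == malloc_size) before partitioning it.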
00:06:15.088 14:10:56 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:06:15.088 14:10:56 -- common/autotest_common.sh@1194 -- # return 0 00:06:15.088 14:10:56 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:15.088 14:10:56 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:15.088 14:10:56 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:15.088 14:10:56 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:15.088 14:10:56 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:15.088 14:10:56 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:15.088 14:10:56 -- setup/common.sh@80 -- # echo 536870912 00:06:15.088 14:10:56 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:15.088 14:10:56 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:15.088 14:10:56 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:15.088 14:10:56 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:15.345 14:10:56 -- target/filesystem.sh@69 -- # partprobe 00:06:16.277 14:10:57 -- target/filesystem.sh@70 -- # sleep 1 00:06:17.208 14:10:58 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:17.208 14:10:58 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:17.208 14:10:58 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:17.208 14:10:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.208 14:10:58 -- common/autotest_common.sh@10 -- # set +x 00:06:17.208 ************************************ 00:06:17.208 START TEST filesystem_ext4 00:06:17.208 ************************************ 00:06:17.208 14:10:58 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:17.208 14:10:58 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:17.208 14:10:58 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:17.208 14:10:58 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:17.208 14:10:58 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:06:17.208 14:10:58 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:17.208 14:10:58 -- common/autotest_common.sh@914 -- # local i=0 00:06:17.208 14:10:58 -- common/autotest_common.sh@915 -- # local force 00:06:17.208 14:10:58 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:06:17.208 14:10:58 -- common/autotest_common.sh@918 -- # force=-F 00:06:17.208 14:10:58 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:17.208 mke2fs 1.46.5 (30-Dec-2021) 00:06:17.466 Discarding device blocks: 0/522240 done 00:06:17.466 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:17.466 Filesystem UUID: 99ad962d-4b33-4f39-8abb-f2685600f102 00:06:17.466 Superblock backups stored on blocks: 00:06:17.466 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:17.466 00:06:17.466 Allocating group tables: 0/64 done 00:06:17.466 Writing inode tables: 0/64 done 00:06:20.077 Creating journal (8192 blocks): done 00:06:20.077 Writing superblocks and filesystem accounting information: 0/64 done 00:06:20.077 00:06:20.077 14:11:01 -- common/autotest_common.sh@931 -- # return 0 00:06:20.077 14:11:01 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:20.335 14:11:01 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:20.335 14:11:01 -- target/filesystem.sh@25 -- # sync 00:06:20.335 14:11:01 -- target/filesystem.sh@26 -- # rm 
/mnt/device/aaa 00:06:20.335 14:11:01 -- target/filesystem.sh@27 -- # sync 00:06:20.335 14:11:01 -- target/filesystem.sh@29 -- # i=0 00:06:20.335 14:11:01 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:20.335 14:11:01 -- target/filesystem.sh@37 -- # kill -0 3076313 00:06:20.335 14:11:01 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:20.335 14:11:01 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:20.335 14:11:01 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:20.335 14:11:01 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:20.335 00:06:20.335 real 0m3.109s 00:06:20.335 user 0m0.016s 00:06:20.335 sys 0m0.032s 00:06:20.335 14:11:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:20.335 14:11:01 -- common/autotest_common.sh@10 -- # set +x 00:06:20.335 ************************************ 00:06:20.335 END TEST filesystem_ext4 00:06:20.335 ************************************ 00:06:20.335 14:11:01 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:20.335 14:11:01 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:20.335 14:11:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.335 14:11:01 -- common/autotest_common.sh@10 -- # set +x 00:06:20.593 ************************************ 00:06:20.593 START TEST filesystem_btrfs 00:06:20.593 ************************************ 00:06:20.593 14:11:01 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:20.593 14:11:01 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:20.593 14:11:01 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:20.593 14:11:01 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:20.593 14:11:01 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:06:20.593 14:11:01 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:20.593 14:11:01 -- common/autotest_common.sh@914 -- # local i=0 00:06:20.593 14:11:01 -- common/autotest_common.sh@915 -- # local force 00:06:20.593 14:11:01 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:06:20.593 14:11:01 -- common/autotest_common.sh@920 -- # force=-f 00:06:20.593 14:11:01 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:20.593 btrfs-progs v6.6.2 00:06:20.593 See https://btrfs.readthedocs.io for more information. 00:06:20.593 00:06:20.593 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
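With ext4 done, the same smoke test now repeats for btrfs (and xfs after it). Each per-filesystem pass is the same few operations, condensed here from the target/filesystem.sh steps visible in the log; $nvmfpid stands for the target PID (3076313 in this run):

    mkfs.btrfs -f /dev/nvme0n1p1      # ext4 uses mkfs.ext4 -F, xfs uses mkfs.xfs -f
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa             # prove the filesystem accepts writes over NVMe/TCP
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                # target process must have survived the I/O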
00:06:20.593 NOTE: several default settings have changed in version 5.15, please make sure 00:06:20.593 this does not affect your deployments: 00:06:20.593 - DUP for metadata (-m dup) 00:06:20.593 - enabled no-holes (-O no-holes) 00:06:20.593 - enabled free-space-tree (-R free-space-tree) 00:06:20.593 00:06:20.593 Label: (null) 00:06:20.593 UUID: 6966bd32-8081-4205-ba78-a821aaec3f6d 00:06:20.593 Node size: 16384 00:06:20.593 Sector size: 4096 00:06:20.593 Filesystem size: 510.00MiB 00:06:20.593 Block group profiles: 00:06:20.593 Data: single 8.00MiB 00:06:20.593 Metadata: DUP 32.00MiB 00:06:20.593 System: DUP 8.00MiB 00:06:20.593 SSD detected: yes 00:06:20.593 Zoned device: no 00:06:20.593 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:20.593 Runtime features: free-space-tree 00:06:20.593 Checksum: crc32c 00:06:20.593 Number of devices: 1 00:06:20.593 Devices: 00:06:20.593 ID SIZE PATH 00:06:20.593 1 510.00MiB /dev/nvme0n1p1 00:06:20.593 00:06:20.593 14:11:02 -- common/autotest_common.sh@931 -- # return 0 00:06:20.593 14:11:02 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:21.529 14:11:02 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:21.529 14:11:02 -- target/filesystem.sh@25 -- # sync 00:06:21.529 14:11:02 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:21.529 14:11:02 -- target/filesystem.sh@27 -- # sync 00:06:21.529 14:11:02 -- target/filesystem.sh@29 -- # i=0 00:06:21.529 14:11:02 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:21.529 14:11:02 -- target/filesystem.sh@37 -- # kill -0 3076313 00:06:21.529 14:11:02 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:21.529 14:11:02 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:21.529 14:11:02 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:21.529 14:11:02 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:21.529 00:06:21.529 real 0m1.004s 00:06:21.529 user 0m0.016s 00:06:21.529 sys 0m0.040s 00:06:21.529 14:11:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:21.529 14:11:02 -- common/autotest_common.sh@10 -- # set +x 00:06:21.529 ************************************ 00:06:21.529 END TEST filesystem_btrfs 00:06:21.529 ************************************ 00:06:21.529 14:11:02 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:21.529 14:11:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:21.529 14:11:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.529 14:11:02 -- common/autotest_common.sh@10 -- # set +x 00:06:21.529 ************************************ 00:06:21.529 START TEST filesystem_xfs 00:06:21.529 ************************************ 00:06:21.529 14:11:03 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:06:21.529 14:11:03 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:21.529 14:11:03 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:21.529 14:11:03 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:21.529 14:11:03 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:06:21.529 14:11:03 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:21.529 14:11:03 -- common/autotest_common.sh@914 -- # local i=0 00:06:21.529 14:11:03 -- common/autotest_common.sh@915 -- # local force 00:06:21.529 14:11:03 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:06:21.529 14:11:03 -- common/autotest_common.sh@920 -- # force=-f 00:06:21.529 14:11:03 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:21.788 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:21.788 = sectsz=512 attr=2, projid32bit=1 00:06:21.788 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:21.788 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:21.788 data = bsize=4096 blocks=130560, imaxpct=25 00:06:21.788 = sunit=0 swidth=0 blks 00:06:21.788 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:21.788 log =internal log bsize=4096 blocks=16384, version=2 00:06:21.788 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:21.788 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:22.355 Discarding blocks...Done. 00:06:22.355 14:11:03 -- common/autotest_common.sh@931 -- # return 0 00:06:22.355 14:11:03 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:24.907 14:11:06 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:24.907 14:11:06 -- target/filesystem.sh@25 -- # sync 00:06:24.907 14:11:06 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:24.907 14:11:06 -- target/filesystem.sh@27 -- # sync 00:06:24.907 14:11:06 -- target/filesystem.sh@29 -- # i=0 00:06:24.907 14:11:06 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:24.907 14:11:06 -- target/filesystem.sh@37 -- # kill -0 3076313 00:06:24.907 14:11:06 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:24.907 14:11:06 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:24.907 14:11:06 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:24.907 14:11:06 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:24.907 00:06:24.907 real 0m3.342s 00:06:24.907 user 0m0.021s 00:06:24.907 sys 0m0.038s 00:06:24.907 14:11:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:24.907 14:11:06 -- common/autotest_common.sh@10 -- # set +x 00:06:24.907 ************************************ 00:06:24.907 END TEST filesystem_xfs 00:06:24.907 ************************************ 00:06:24.907 14:11:06 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:25.165 14:11:06 -- target/filesystem.sh@93 -- # sync 00:06:25.165 14:11:06 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:25.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:25.165 14:11:06 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:25.165 14:11:06 -- common/autotest_common.sh@1205 -- # local i=0 00:06:25.165 14:11:06 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:06:25.165 14:11:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:25.165 14:11:06 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:06:25.165 14:11:06 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:25.165 14:11:06 -- common/autotest_common.sh@1217 -- # return 0 00:06:25.165 14:11:06 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:25.165 14:11:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:25.165 14:11:06 -- common/autotest_common.sh@10 -- # set +x 00:06:25.165 14:11:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:25.165 14:11:06 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:25.165 14:11:06 -- target/filesystem.sh@101 -- # killprocess 3076313 00:06:25.165 14:11:06 -- common/autotest_common.sh@936 -- # '[' -z 3076313 ']' 00:06:25.165 14:11:06 -- common/autotest_common.sh@940 -- # kill -0 3076313 00:06:25.165 14:11:06 -- 
common/autotest_common.sh@941 -- # uname 00:06:25.165 14:11:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:25.165 14:11:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3076313 00:06:25.165 14:11:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:25.165 14:11:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:25.165 14:11:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3076313' 00:06:25.165 killing process with pid 3076313 00:06:25.165 14:11:06 -- common/autotest_common.sh@955 -- # kill 3076313 00:06:25.165 14:11:06 -- common/autotest_common.sh@960 -- # wait 3076313 00:06:25.731 14:11:07 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:25.731 00:06:25.731 real 0m13.690s 00:06:25.731 user 0m52.516s 00:06:25.731 sys 0m1.902s 00:06:25.731 14:11:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:25.731 14:11:07 -- common/autotest_common.sh@10 -- # set +x 00:06:25.731 ************************************ 00:06:25.731 END TEST nvmf_filesystem_no_in_capsule 00:06:25.731 ************************************ 00:06:25.731 14:11:07 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:25.731 14:11:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:25.731 14:11:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.731 14:11:07 -- common/autotest_common.sh@10 -- # set +x 00:06:25.731 ************************************ 00:06:25.731 START TEST nvmf_filesystem_in_capsule 00:06:25.731 ************************************ 00:06:25.731 14:11:07 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:06:25.731 14:11:07 -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:25.731 14:11:07 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:25.731 14:11:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:25.731 14:11:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:25.731 14:11:07 -- common/autotest_common.sh@10 -- # set +x 00:06:25.731 14:11:07 -- nvmf/common.sh@470 -- # nvmfpid=3077879 00:06:25.731 14:11:07 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:25.731 14:11:07 -- nvmf/common.sh@471 -- # waitforlisten 3077879 00:06:25.731 14:11:07 -- common/autotest_common.sh@817 -- # '[' -z 3077879 ']' 00:06:25.731 14:11:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.731 14:11:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:25.731 14:11:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.731 14:11:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:25.731 14:11:07 -- common/autotest_common.sh@10 -- # set +x 00:06:25.731 [2024-04-26 14:11:07.232926] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
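The run starting here is the same filesystem suite with one functional change: nvmf_create_transport is invoked with -c 4096 instead of -c 0, allowing up to 4 KiB of in-capsule data so small writes can travel inside the command capsule rather than as a separate data transfer. The two invocations, both visible verbatim in this log (the harness drives them through its rpc_cmd helper; rpc.py is the equivalent direct call):

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # nvmf_filesystem_no_in_capsule
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # nvmf_filesystem_in_capsule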
00:06:25.731 [2024-04-26 14:11:07.233015] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:25.731 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.731 [2024-04-26 14:11:07.297117] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:25.989 [2024-04-26 14:11:07.412194] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:25.989 [2024-04-26 14:11:07.412250] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:25.989 [2024-04-26 14:11:07.412265] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:25.989 [2024-04-26 14:11:07.412279] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:25.989 [2024-04-26 14:11:07.412291] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:25.989 [2024-04-26 14:11:07.412380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.989 [2024-04-26 14:11:07.412436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.989 [2024-04-26 14:11:07.412487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.989 [2024-04-26 14:11:07.412484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:25.989 14:11:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:25.989 14:11:07 -- common/autotest_common.sh@850 -- # return 0 00:06:25.989 14:11:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:25.989 14:11:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:25.989 14:11:07 -- common/autotest_common.sh@10 -- # set +x 00:06:25.989 14:11:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:25.989 14:11:07 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:25.989 14:11:07 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:25.989 14:11:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:25.989 14:11:07 -- common/autotest_common.sh@10 -- # set +x 00:06:26.247 [2024-04-26 14:11:07.561214] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.247 14:11:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:26.247 14:11:07 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:26.247 14:11:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:26.247 14:11:07 -- common/autotest_common.sh@10 -- # set +x 00:06:26.247 Malloc1 00:06:26.247 14:11:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:26.247 14:11:07 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:26.247 14:11:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:26.247 14:11:07 -- common/autotest_common.sh@10 -- # set +x 00:06:26.247 14:11:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:26.247 14:11:07 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:26.247 14:11:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:26.247 14:11:07 -- common/autotest_common.sh@10 -- # set +x 00:06:26.247 14:11:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:26.247 14:11:07 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:26.247 14:11:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:26.247 14:11:07 -- common/autotest_common.sh@10 -- # set +x 00:06:26.247 [2024-04-26 14:11:07.721849] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:26.247 14:11:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:26.247 14:11:07 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:26.247 14:11:07 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:06:26.247 14:11:07 -- common/autotest_common.sh@1365 -- # local bdev_info 00:06:26.247 14:11:07 -- common/autotest_common.sh@1366 -- # local bs 00:06:26.247 14:11:07 -- common/autotest_common.sh@1367 -- # local nb 00:06:26.247 14:11:07 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:26.247 14:11:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:26.247 14:11:07 -- common/autotest_common.sh@10 -- # set +x 00:06:26.247 14:11:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:26.247 14:11:07 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:06:26.247 { 00:06:26.247 "name": "Malloc1", 00:06:26.247 "aliases": [ 00:06:26.247 "4524e485-9cba-4c85-96fc-351fad31563f" 00:06:26.247 ], 00:06:26.247 "product_name": "Malloc disk", 00:06:26.247 "block_size": 512, 00:06:26.248 "num_blocks": 1048576, 00:06:26.248 "uuid": "4524e485-9cba-4c85-96fc-351fad31563f", 00:06:26.248 "assigned_rate_limits": { 00:06:26.248 "rw_ios_per_sec": 0, 00:06:26.248 "rw_mbytes_per_sec": 0, 00:06:26.248 "r_mbytes_per_sec": 0, 00:06:26.248 "w_mbytes_per_sec": 0 00:06:26.248 }, 00:06:26.248 "claimed": true, 00:06:26.248 "claim_type": "exclusive_write", 00:06:26.248 "zoned": false, 00:06:26.248 "supported_io_types": { 00:06:26.248 "read": true, 00:06:26.248 "write": true, 00:06:26.248 "unmap": true, 00:06:26.248 "write_zeroes": true, 00:06:26.248 "flush": true, 00:06:26.248 "reset": true, 00:06:26.248 "compare": false, 00:06:26.248 "compare_and_write": false, 00:06:26.248 "abort": true, 00:06:26.248 "nvme_admin": false, 00:06:26.248 "nvme_io": false 00:06:26.248 }, 00:06:26.248 "memory_domains": [ 00:06:26.248 { 00:06:26.248 "dma_device_id": "system", 00:06:26.248 "dma_device_type": 1 00:06:26.248 }, 00:06:26.248 { 00:06:26.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:26.248 "dma_device_type": 2 00:06:26.248 } 00:06:26.248 ], 00:06:26.248 "driver_specific": {} 00:06:26.248 } 00:06:26.248 ]' 00:06:26.248 14:11:07 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:06:26.248 14:11:07 -- common/autotest_common.sh@1369 -- # bs=512 00:06:26.248 14:11:07 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:06:26.505 14:11:07 -- common/autotest_common.sh@1370 -- # nb=1048576 00:06:26.505 14:11:07 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:06:26.505 14:11:07 -- common/autotest_common.sh@1374 -- # echo 512 00:06:26.505 14:11:07 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:26.505 14:11:07 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:26.763 14:11:08 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:26.763 14:11:08 -- common/autotest_common.sh@1184 -- # local i=0 00:06:26.763 14:11:08 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:06:26.763 14:11:08 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:06:26.763 14:11:08 -- common/autotest_common.sh@1191 -- # sleep 2 00:06:28.662 14:11:10 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:06:28.662 14:11:10 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:06:28.662 14:11:10 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:06:28.920 14:11:10 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:06:28.920 14:11:10 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:06:28.920 14:11:10 -- common/autotest_common.sh@1194 -- # return 0 00:06:28.920 14:11:10 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:28.920 14:11:10 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:28.920 14:11:10 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:28.920 14:11:10 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:28.920 14:11:10 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:28.920 14:11:10 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:28.920 14:11:10 -- setup/common.sh@80 -- # echo 536870912 00:06:28.920 14:11:10 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:28.920 14:11:10 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:28.920 14:11:10 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:28.920 14:11:10 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:28.920 14:11:10 -- target/filesystem.sh@69 -- # partprobe 00:06:29.486 14:11:10 -- target/filesystem.sh@70 -- # sleep 1 00:06:30.860 14:11:11 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:30.860 14:11:11 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:30.860 14:11:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:30.860 14:11:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.860 14:11:11 -- common/autotest_common.sh@10 -- # set +x 00:06:30.860 ************************************ 00:06:30.860 START TEST filesystem_in_capsule_ext4 00:06:30.860 ************************************ 00:06:30.860 14:11:12 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:30.860 14:11:12 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:30.860 14:11:12 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:30.860 14:11:12 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:30.860 14:11:12 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:06:30.860 14:11:12 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:30.860 14:11:12 -- common/autotest_common.sh@914 -- # local i=0 00:06:30.860 14:11:12 -- common/autotest_common.sh@915 -- # local force 00:06:30.860 14:11:12 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:06:30.860 14:11:12 -- common/autotest_common.sh@918 -- # force=-F 00:06:30.860 14:11:12 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:30.860 mke2fs 1.46.5 (30-Dec-2021) 00:06:30.860 Discarding device blocks: 0/522240 done 00:06:30.860 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:30.860 Filesystem UUID: 246a206e-549f-4f32-be7e-315943c91d2e 00:06:30.860 Superblock backups stored on blocks: 00:06:30.860 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:30.860 00:06:30.860 
Allocating group tables: 0/64 done 00:06:30.860 Writing inode tables: 0/64 done 00:06:30.861 Creating journal (8192 blocks): done 00:06:30.861 Writing superblocks and filesystem accounting information: 0/64 done 00:06:30.861 00:06:30.861 14:11:12 -- common/autotest_common.sh@931 -- # return 0 00:06:30.861 14:11:12 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:30.861 14:11:12 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:30.861 14:11:12 -- target/filesystem.sh@25 -- # sync 00:06:30.861 14:11:12 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:30.861 14:11:12 -- target/filesystem.sh@27 -- # sync 00:06:30.861 14:11:12 -- target/filesystem.sh@29 -- # i=0 00:06:30.861 14:11:12 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:31.119 14:11:12 -- target/filesystem.sh@37 -- # kill -0 3077879 00:06:31.119 14:11:12 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:31.119 14:11:12 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:31.119 14:11:12 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:31.119 14:11:12 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:31.119 00:06:31.119 real 0m0.365s 00:06:31.119 user 0m0.016s 00:06:31.119 sys 0m0.038s 00:06:31.119 14:11:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:31.119 14:11:12 -- common/autotest_common.sh@10 -- # set +x 00:06:31.119 ************************************ 00:06:31.119 END TEST filesystem_in_capsule_ext4 00:06:31.119 ************************************ 00:06:31.119 14:11:12 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:31.119 14:11:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:31.119 14:11:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.119 14:11:12 -- common/autotest_common.sh@10 -- # set +x 00:06:31.119 ************************************ 00:06:31.119 START TEST filesystem_in_capsule_btrfs 00:06:31.119 ************************************ 00:06:31.119 14:11:12 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:31.119 14:11:12 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:31.119 14:11:12 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:31.119 14:11:12 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:31.119 14:11:12 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:06:31.119 14:11:12 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:31.119 14:11:12 -- common/autotest_common.sh@914 -- # local i=0 00:06:31.119 14:11:12 -- common/autotest_common.sh@915 -- # local force 00:06:31.119 14:11:12 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:06:31.119 14:11:12 -- common/autotest_common.sh@920 -- # force=-f 00:06:31.119 14:11:12 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:31.119 btrfs-progs v6.6.2 00:06:31.119 See https://btrfs.readthedocs.io for more information. 00:06:31.119 00:06:31.119 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:31.119 NOTE: several default settings have changed in version 5.15, please make sure 00:06:31.119 this does not affect your deployments: 00:06:31.119 - DUP for metadata (-m dup) 00:06:31.119 - enabled no-holes (-O no-holes) 00:06:31.119 - enabled free-space-tree (-R free-space-tree) 00:06:31.119 00:06:31.119 Label: (null) 00:06:31.119 UUID: a4bb3f87-f9e5-4d1f-a748-9dd051776061 00:06:31.119 Node size: 16384 00:06:31.119 Sector size: 4096 00:06:31.119 Filesystem size: 510.00MiB 00:06:31.119 Block group profiles: 00:06:31.119 Data: single 8.00MiB 00:06:31.119 Metadata: DUP 32.00MiB 00:06:31.119 System: DUP 8.00MiB 00:06:31.119 SSD detected: yes 00:06:31.119 Zoned device: no 00:06:31.119 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:31.119 Runtime features: free-space-tree 00:06:31.119 Checksum: crc32c 00:06:31.119 Number of devices: 1 00:06:31.119 Devices: 00:06:31.119 ID SIZE PATH 00:06:31.119 1 510.00MiB /dev/nvme0n1p1 00:06:31.119 00:06:31.119 14:11:12 -- common/autotest_common.sh@931 -- # return 0 00:06:31.119 14:11:12 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:31.378 14:11:12 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:31.378 14:11:12 -- target/filesystem.sh@25 -- # sync 00:06:31.378 14:11:12 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:31.378 14:11:12 -- target/filesystem.sh@27 -- # sync 00:06:31.378 14:11:12 -- target/filesystem.sh@29 -- # i=0 00:06:31.378 14:11:12 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:31.378 14:11:12 -- target/filesystem.sh@37 -- # kill -0 3077879 00:06:31.378 14:11:12 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:31.378 14:11:12 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:31.378 14:11:12 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:31.378 14:11:12 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:31.378 00:06:31.378 real 0m0.238s 00:06:31.378 user 0m0.014s 00:06:31.378 sys 0m0.043s 00:06:31.378 14:11:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:31.378 14:11:12 -- common/autotest_common.sh@10 -- # set +x 00:06:31.378 ************************************ 00:06:31.378 END TEST filesystem_in_capsule_btrfs 00:06:31.378 ************************************ 00:06:31.378 14:11:12 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:31.378 14:11:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:31.378 14:11:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.378 14:11:12 -- common/autotest_common.sh@10 -- # set +x 00:06:31.378 ************************************ 00:06:31.378 START TEST filesystem_in_capsule_xfs 00:06:31.378 ************************************ 00:06:31.378 14:11:12 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:06:31.378 14:11:12 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:31.378 14:11:12 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:31.378 14:11:12 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:31.378 14:11:12 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:06:31.378 14:11:12 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:31.378 14:11:12 -- common/autotest_common.sh@914 -- # local i=0 00:06:31.378 14:11:12 -- common/autotest_common.sh@915 -- # local force 00:06:31.378 14:11:12 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:06:31.378 14:11:12 -- common/autotest_common.sh@920 -- # force=-f 
00:06:31.378 14:11:12 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:31.636 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:31.636 = sectsz=512 attr=2, projid32bit=1 00:06:31.636 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:31.636 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:31.636 data = bsize=4096 blocks=130560, imaxpct=25 00:06:31.636 = sunit=0 swidth=0 blks 00:06:31.636 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:31.636 log =internal log bsize=4096 blocks=16384, version=2 00:06:31.636 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:31.636 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:32.209 Discarding blocks...Done. 00:06:32.209 14:11:13 -- common/autotest_common.sh@931 -- # return 0 00:06:32.209 14:11:13 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:34.737 14:11:16 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:34.737 14:11:16 -- target/filesystem.sh@25 -- # sync 00:06:34.737 14:11:16 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:34.737 14:11:16 -- target/filesystem.sh@27 -- # sync 00:06:34.996 14:11:16 -- target/filesystem.sh@29 -- # i=0 00:06:34.996 14:11:16 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:34.996 14:11:16 -- target/filesystem.sh@37 -- # kill -0 3077879 00:06:34.996 14:11:16 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:34.996 14:11:16 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:34.996 14:11:16 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:34.996 14:11:16 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:34.996 00:06:34.996 real 0m3.395s 00:06:34.996 user 0m0.012s 00:06:34.996 sys 0m0.048s 00:06:34.996 14:11:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:34.996 14:11:16 -- common/autotest_common.sh@10 -- # set +x 00:06:34.996 ************************************ 00:06:34.996 END TEST filesystem_in_capsule_xfs 00:06:34.996 ************************************ 00:06:34.996 14:11:16 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:34.996 14:11:16 -- target/filesystem.sh@93 -- # sync 00:06:34.996 14:11:16 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:34.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:34.996 14:11:16 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:34.996 14:11:16 -- common/autotest_common.sh@1205 -- # local i=0 00:06:34.996 14:11:16 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:06:34.996 14:11:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:34.996 14:11:16 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:06:34.996 14:11:16 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:34.997 14:11:16 -- common/autotest_common.sh@1217 -- # return 0 00:06:34.997 14:11:16 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:34.997 14:11:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.997 14:11:16 -- common/autotest_common.sh@10 -- # set +x 00:06:34.997 14:11:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.997 14:11:16 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:34.997 14:11:16 -- target/filesystem.sh@101 -- # killprocess 3077879 00:06:34.997 14:11:16 -- common/autotest_common.sh@936 -- # '[' -z 3077879 ']' 00:06:34.997 14:11:16 -- common/autotest_common.sh@940 -- # kill -0 3077879 
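After the in-capsule xfs pass, teardown mirrors the earlier suite: drop the test partition, disconnect the kernel initiator, delete the subsystem over RPC, then kill the target and confirm it exits. Condensed from the entries around this point ($nvmfpid is 3077879 here):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1        # flock avoids racing other openers of the disk
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"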
00:06:34.997 14:11:16 -- common/autotest_common.sh@941 -- # uname 00:06:34.997 14:11:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:34.997 14:11:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3077879 00:06:34.997 14:11:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:34.997 14:11:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:34.997 14:11:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3077879' 00:06:34.997 killing process with pid 3077879 00:06:34.997 14:11:16 -- common/autotest_common.sh@955 -- # kill 3077879 00:06:34.997 14:11:16 -- common/autotest_common.sh@960 -- # wait 3077879 00:06:35.565 14:11:16 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:35.565 00:06:35.565 real 0m9.724s 00:06:35.565 user 0m37.087s 00:06:35.565 sys 0m1.604s 00:06:35.565 14:11:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:35.565 14:11:16 -- common/autotest_common.sh@10 -- # set +x 00:06:35.565 ************************************ 00:06:35.565 END TEST nvmf_filesystem_in_capsule 00:06:35.565 ************************************ 00:06:35.565 14:11:16 -- target/filesystem.sh@108 -- # nvmftestfini 00:06:35.565 14:11:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:35.565 14:11:16 -- nvmf/common.sh@117 -- # sync 00:06:35.565 14:11:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:35.566 14:11:16 -- nvmf/common.sh@120 -- # set +e 00:06:35.566 14:11:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:35.566 14:11:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:35.566 rmmod nvme_tcp 00:06:35.566 rmmod nvme_fabrics 00:06:35.566 rmmod nvme_keyring 00:06:35.566 14:11:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:35.566 14:11:16 -- nvmf/common.sh@124 -- # set -e 00:06:35.566 14:11:16 -- nvmf/common.sh@125 -- # return 0 00:06:35.566 14:11:16 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:06:35.566 14:11:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:35.566 14:11:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:35.566 14:11:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:35.566 14:11:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:35.566 14:11:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:35.566 14:11:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.566 14:11:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:35.566 14:11:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.475 14:11:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:37.475 00:06:37.475 real 0m27.740s 00:06:37.475 user 1m30.442s 00:06:37.475 sys 0m4.951s 00:06:37.475 14:11:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:37.475 14:11:19 -- common/autotest_common.sh@10 -- # set +x 00:06:37.475 ************************************ 00:06:37.475 END TEST nvmf_filesystem 00:06:37.475 ************************************ 00:06:37.736 14:11:19 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:37.736 14:11:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:37.736 14:11:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.736 14:11:19 -- common/autotest_common.sh@10 -- # set +x 00:06:37.736 ************************************ 00:06:37.736 START TEST nvmf_discovery 00:06:37.736 ************************************ 00:06:37.736 
14:11:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:37.736 * Looking for test storage... 00:06:37.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:37.736 14:11:19 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:37.736 14:11:19 -- nvmf/common.sh@7 -- # uname -s 00:06:37.736 14:11:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.736 14:11:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.736 14:11:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.736 14:11:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.736 14:11:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.736 14:11:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.736 14:11:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.736 14:11:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.736 14:11:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.736 14:11:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.736 14:11:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:06:37.736 14:11:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:06:37.736 14:11:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.736 14:11:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.736 14:11:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:37.736 14:11:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.736 14:11:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.736 14:11:19 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.736 14:11:19 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.736 14:11:19 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.736 14:11:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.736 14:11:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.736 14:11:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.736 14:11:19 -- paths/export.sh@5 -- # export PATH 00:06:37.736 14:11:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.736 14:11:19 -- nvmf/common.sh@47 -- # : 0 00:06:37.737 14:11:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:37.737 14:11:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:37.737 14:11:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.737 14:11:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.737 14:11:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.737 14:11:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:37.737 14:11:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:37.737 14:11:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:37.737 14:11:19 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:37.737 14:11:19 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:37.737 14:11:19 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:37.737 14:11:19 -- target/discovery.sh@15 -- # hash nvme 00:06:37.737 14:11:19 -- target/discovery.sh@20 -- # nvmftestinit 00:06:37.737 14:11:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:37.737 14:11:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:37.737 14:11:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:37.737 14:11:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:37.737 14:11:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:37.737 14:11:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.737 14:11:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:37.737 14:11:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.737 14:11:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:37.737 14:11:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:37.737 14:11:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:37.737 14:11:19 -- common/autotest_common.sh@10 -- # set +x 00:06:39.647 14:11:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:39.647 14:11:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:39.647 14:11:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:39.647 14:11:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:39.647 14:11:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:39.647 14:11:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:39.647 14:11:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:39.647 14:11:20 -- 
nvmf/common.sh@295 -- # net_devs=() 00:06:39.647 14:11:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:39.647 14:11:20 -- nvmf/common.sh@296 -- # e810=() 00:06:39.647 14:11:20 -- nvmf/common.sh@296 -- # local -ga e810 00:06:39.647 14:11:20 -- nvmf/common.sh@297 -- # x722=() 00:06:39.647 14:11:20 -- nvmf/common.sh@297 -- # local -ga x722 00:06:39.647 14:11:20 -- nvmf/common.sh@298 -- # mlx=() 00:06:39.647 14:11:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:39.647 14:11:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:39.647 14:11:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:39.647 14:11:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:39.647 14:11:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:39.647 14:11:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:39.647 14:11:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:39.647 14:11:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:39.647 14:11:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:39.647 14:11:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:39.647 14:11:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:39.647 14:11:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:39.647 14:11:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:39.647 14:11:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:39.647 14:11:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:39.647 14:11:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:39.647 14:11:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:39.647 14:11:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:39.647 14:11:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:39.647 14:11:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:06:39.647 Found 0000:08:00.0 (0x8086 - 0x159b) 00:06:39.647 14:11:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:39.647 14:11:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:39.647 14:11:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:39.647 14:11:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:39.647 14:11:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:39.647 14:11:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:39.647 14:11:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:06:39.647 Found 0000:08:00.1 (0x8086 - 0x159b) 00:06:39.647 14:11:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:39.647 14:11:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:39.647 14:11:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:39.647 14:11:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:39.647 14:11:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:39.647 14:11:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:39.647 14:11:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:39.647 14:11:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:39.647 14:11:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:39.647 14:11:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:39.647 14:11:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:39.647 14:11:20 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:39.647 14:11:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:06:39.647 Found net devices under 0000:08:00.0: cvl_0_0 00:06:39.647 14:11:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:39.647 14:11:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:39.647 14:11:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:39.647 14:11:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:39.647 14:11:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:39.647 14:11:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:06:39.647 Found net devices under 0000:08:00.1: cvl_0_1 00:06:39.647 14:11:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:39.647 14:11:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:39.647 14:11:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:39.647 14:11:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:39.647 14:11:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:39.647 14:11:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:39.647 14:11:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:39.647 14:11:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:39.647 14:11:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:39.647 14:11:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:39.647 14:11:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:39.647 14:11:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:39.647 14:11:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:39.647 14:11:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:39.647 14:11:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:39.647 14:11:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:39.647 14:11:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:39.647 14:11:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:39.647 14:11:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:39.647 14:11:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:39.647 14:11:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:39.647 14:11:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:39.647 14:11:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:39.647 14:11:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:39.647 14:11:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:39.647 14:11:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:39.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:39.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:06:39.647 00:06:39.647 --- 10.0.0.2 ping statistics --- 00:06:39.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.647 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:06:39.647 14:11:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:39.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:39.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:06:39.647 00:06:39.647 --- 10.0.0.1 ping statistics --- 00:06:39.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.647 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:06:39.647 14:11:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:39.647 14:11:20 -- nvmf/common.sh@411 -- # return 0 00:06:39.647 14:11:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:39.647 14:11:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:39.647 14:11:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:39.647 14:11:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:39.647 14:11:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:39.647 14:11:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:39.647 14:11:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:39.647 14:11:20 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:39.647 14:11:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:39.647 14:11:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:39.647 14:11:20 -- common/autotest_common.sh@10 -- # set +x 00:06:39.647 14:11:20 -- nvmf/common.sh@470 -- # nvmfpid=3080420 00:06:39.647 14:11:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:39.647 14:11:20 -- nvmf/common.sh@471 -- # waitforlisten 3080420 00:06:39.647 14:11:20 -- common/autotest_common.sh@817 -- # '[' -z 3080420 ']' 00:06:39.647 14:11:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.647 14:11:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:39.647 14:11:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.647 14:11:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:39.647 14:11:20 -- common/autotest_common.sh@10 -- # set +x 00:06:39.647 [2024-04-26 14:11:21.037907] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:06:39.647 [2024-04-26 14:11:21.038009] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:39.647 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.647 [2024-04-26 14:11:21.103944] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.906 [2024-04-26 14:11:21.220341] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:39.906 [2024-04-26 14:11:21.220395] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:39.906 [2024-04-26 14:11:21.220410] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:39.906 [2024-04-26 14:11:21.220424] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:39.906 [2024-04-26 14:11:21.220436] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
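The target has just been launched inside the namespace (nvmfpid=3080420 above), and the harness blocks in waitforlisten before issuing any RPC. A sketch of that start-and-wait step, under the assumption, consistent with SPDK's common.sh, that readiness is probed by polling the RPC socket:

    # -m 0xF pins reactors to cores 0-3 (matching the reactor notices below);
    # -e 0xFFFF enables the tracepoint group mask reported in the notices above
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait until the app answers on its RPC socket before any rpc_cmd call
    for ((i = 100; i != 0; i--)); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done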
00:06:39.906 [2024-04-26 14:11:21.220503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.906 [2024-04-26 14:11:21.220568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.906 [2024-04-26 14:11:21.220675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.906 [2024-04-26 14:11:21.220679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.907 14:11:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:39.907 14:11:21 -- common/autotest_common.sh@850 -- # return 0 00:06:39.907 14:11:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:39.907 14:11:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:39.907 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:39.907 14:11:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:39.907 14:11:21 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:39.907 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.907 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:39.907 [2024-04-26 14:11:21.367307] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.907 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.907 14:11:21 -- target/discovery.sh@26 -- # seq 1 4 00:06:39.907 14:11:21 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:39.907 14:11:21 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:39.907 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.907 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:39.907 Null1 00:06:39.907 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.907 14:11:21 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:39.907 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.907 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:39.907 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.907 14:11:21 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:39.907 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.907 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:39.907 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.907 14:11:21 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:39.907 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.907 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:39.907 [2024-04-26 14:11:21.407586] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:39.907 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.907 14:11:21 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:39.907 14:11:21 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:39.907 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.907 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:39.907 Null2 00:06:39.907 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.907 14:11:21 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:39.907 14:11:21 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.907 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:39.907 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.907 14:11:21 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:39.907 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.907 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:39.907 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.907 14:11:21 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:39.907 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.907 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:39.907 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.907 14:11:21 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:39.907 14:11:21 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:39.907 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.907 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:39.907 Null3 00:06:39.907 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.907 14:11:21 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:39.907 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.907 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:39.907 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.907 14:11:21 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:39.907 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.907 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:39.907 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:39.907 14:11:21 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:39.907 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:39.907 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:40.166 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:40.166 14:11:21 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:40.166 14:11:21 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:40.166 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:40.166 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:40.166 Null4 00:06:40.166 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:40.166 14:11:21 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:40.166 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:40.166 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:40.166 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:40.166 14:11:21 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:40.166 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:40.166 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:40.166 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:40.166 14:11:21 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:40.166 
14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:40.166 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:40.166 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:40.166 14:11:21 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:40.166 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:40.166 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:40.166 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:40.166 14:11:21 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:40.166 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:40.166 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:40.166 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:40.166 14:11:21 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 4420 00:06:40.166 00:06:40.166 Discovery Log Number of Records 6, Generation counter 6 00:06:40.166 =====Discovery Log Entry 0====== 00:06:40.166 trtype: tcp 00:06:40.166 adrfam: ipv4 00:06:40.166 subtype: current discovery subsystem 00:06:40.166 treq: not required 00:06:40.166 portid: 0 00:06:40.166 trsvcid: 4420 00:06:40.166 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:40.166 traddr: 10.0.0.2 00:06:40.166 eflags: explicit discovery connections, duplicate discovery information 00:06:40.166 sectype: none 00:06:40.166 =====Discovery Log Entry 1====== 00:06:40.166 trtype: tcp 00:06:40.166 adrfam: ipv4 00:06:40.166 subtype: nvme subsystem 00:06:40.166 treq: not required 00:06:40.166 portid: 0 00:06:40.166 trsvcid: 4420 00:06:40.166 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:40.166 traddr: 10.0.0.2 00:06:40.166 eflags: none 00:06:40.166 sectype: none 00:06:40.166 =====Discovery Log Entry 2====== 00:06:40.166 trtype: tcp 00:06:40.166 adrfam: ipv4 00:06:40.166 subtype: nvme subsystem 00:06:40.166 treq: not required 00:06:40.166 portid: 0 00:06:40.166 trsvcid: 4420 00:06:40.166 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:40.166 traddr: 10.0.0.2 00:06:40.166 eflags: none 00:06:40.166 sectype: none 00:06:40.166 =====Discovery Log Entry 3====== 00:06:40.166 trtype: tcp 00:06:40.166 adrfam: ipv4 00:06:40.166 subtype: nvme subsystem 00:06:40.166 treq: not required 00:06:40.166 portid: 0 00:06:40.166 trsvcid: 4420 00:06:40.166 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:40.166 traddr: 10.0.0.2 00:06:40.166 eflags: none 00:06:40.166 sectype: none 00:06:40.166 =====Discovery Log Entry 4====== 00:06:40.166 trtype: tcp 00:06:40.166 adrfam: ipv4 00:06:40.166 subtype: nvme subsystem 00:06:40.166 treq: not required 00:06:40.166 portid: 0 00:06:40.166 trsvcid: 4420 00:06:40.166 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:40.166 traddr: 10.0.0.2 00:06:40.166 eflags: none 00:06:40.166 sectype: none 00:06:40.166 =====Discovery Log Entry 5====== 00:06:40.166 trtype: tcp 00:06:40.166 adrfam: ipv4 00:06:40.166 subtype: discovery subsystem referral 00:06:40.166 treq: not required 00:06:40.166 portid: 0 00:06:40.166 trsvcid: 4430 00:06:40.166 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:40.166 traddr: 10.0.0.2 00:06:40.166 eflags: none 00:06:40.166 sectype: none 00:06:40.166 14:11:21 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:40.166 Perform nvmf subsystem discovery via RPC 00:06:40.166 14:11:21 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:40.166 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:40.166 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:40.166 [2024-04-26 14:11:21.591962] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:06:40.166 [ 00:06:40.166 { 00:06:40.166 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:40.166 "subtype": "Discovery", 00:06:40.166 "listen_addresses": [ 00:06:40.166 { 00:06:40.166 "transport": "TCP", 00:06:40.166 "trtype": "TCP", 00:06:40.166 "adrfam": "IPv4", 00:06:40.166 "traddr": "10.0.0.2", 00:06:40.166 "trsvcid": "4420" 00:06:40.166 } 00:06:40.166 ], 00:06:40.166 "allow_any_host": true, 00:06:40.166 "hosts": [] 00:06:40.166 }, 00:06:40.166 { 00:06:40.166 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:40.166 "subtype": "NVMe", 00:06:40.166 "listen_addresses": [ 00:06:40.166 { 00:06:40.166 "transport": "TCP", 00:06:40.166 "trtype": "TCP", 00:06:40.166 "adrfam": "IPv4", 00:06:40.166 "traddr": "10.0.0.2", 00:06:40.166 "trsvcid": "4420" 00:06:40.166 } 00:06:40.166 ], 00:06:40.166 "allow_any_host": true, 00:06:40.166 "hosts": [], 00:06:40.166 "serial_number": "SPDK00000000000001", 00:06:40.166 "model_number": "SPDK bdev Controller", 00:06:40.166 "max_namespaces": 32, 00:06:40.166 "min_cntlid": 1, 00:06:40.166 "max_cntlid": 65519, 00:06:40.166 "namespaces": [ 00:06:40.166 { 00:06:40.166 "nsid": 1, 00:06:40.166 "bdev_name": "Null1", 00:06:40.166 "name": "Null1", 00:06:40.166 "nguid": "D0366135B8DF48FEB51B17F54A49F44A", 00:06:40.166 "uuid": "d0366135-b8df-48fe-b51b-17f54a49f44a" 00:06:40.166 } 00:06:40.166 ] 00:06:40.166 }, 00:06:40.166 { 00:06:40.166 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:40.166 "subtype": "NVMe", 00:06:40.166 "listen_addresses": [ 00:06:40.166 { 00:06:40.166 "transport": "TCP", 00:06:40.166 "trtype": "TCP", 00:06:40.166 "adrfam": "IPv4", 00:06:40.166 "traddr": "10.0.0.2", 00:06:40.166 "trsvcid": "4420" 00:06:40.166 } 00:06:40.166 ], 00:06:40.166 "allow_any_host": true, 00:06:40.166 "hosts": [], 00:06:40.166 "serial_number": "SPDK00000000000002", 00:06:40.167 "model_number": "SPDK bdev Controller", 00:06:40.167 "max_namespaces": 32, 00:06:40.167 "min_cntlid": 1, 00:06:40.167 "max_cntlid": 65519, 00:06:40.167 "namespaces": [ 00:06:40.167 { 00:06:40.167 "nsid": 1, 00:06:40.167 "bdev_name": "Null2", 00:06:40.167 "name": "Null2", 00:06:40.167 "nguid": "A1DE63D510C84C5D9DA6F3C01DF99DA4", 00:06:40.167 "uuid": "a1de63d5-10c8-4c5d-9da6-f3c01df99da4" 00:06:40.167 } 00:06:40.167 ] 00:06:40.167 }, 00:06:40.167 { 00:06:40.167 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:40.167 "subtype": "NVMe", 00:06:40.167 "listen_addresses": [ 00:06:40.167 { 00:06:40.167 "transport": "TCP", 00:06:40.167 "trtype": "TCP", 00:06:40.167 "adrfam": "IPv4", 00:06:40.167 "traddr": "10.0.0.2", 00:06:40.167 "trsvcid": "4420" 00:06:40.167 } 00:06:40.167 ], 00:06:40.167 "allow_any_host": true, 00:06:40.167 "hosts": [], 00:06:40.167 "serial_number": "SPDK00000000000003", 00:06:40.167 "model_number": "SPDK bdev Controller", 00:06:40.167 "max_namespaces": 32, 00:06:40.167 "min_cntlid": 1, 00:06:40.167 "max_cntlid": 65519, 00:06:40.167 "namespaces": [ 00:06:40.167 { 00:06:40.167 "nsid": 1, 00:06:40.167 "bdev_name": "Null3", 00:06:40.167 "name": "Null3", 00:06:40.167 "nguid": "8407E7D946D846ACAF07B801A519869D", 00:06:40.167 "uuid": "8407e7d9-46d8-46ac-af07-b801a519869d" 00:06:40.167 } 00:06:40.167 ] 
00:06:40.167 }, 00:06:40.167 { 00:06:40.167 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:40.167 "subtype": "NVMe", 00:06:40.167 "listen_addresses": [ 00:06:40.167 { 00:06:40.167 "transport": "TCP", 00:06:40.167 "trtype": "TCP", 00:06:40.167 "adrfam": "IPv4", 00:06:40.167 "traddr": "10.0.0.2", 00:06:40.167 "trsvcid": "4420" 00:06:40.167 } 00:06:40.167 ], 00:06:40.167 "allow_any_host": true, 00:06:40.167 "hosts": [], 00:06:40.167 "serial_number": "SPDK00000000000004", 00:06:40.167 "model_number": "SPDK bdev Controller", 00:06:40.167 "max_namespaces": 32, 00:06:40.167 "min_cntlid": 1, 00:06:40.167 "max_cntlid": 65519, 00:06:40.167 "namespaces": [ 00:06:40.167 { 00:06:40.167 "nsid": 1, 00:06:40.167 "bdev_name": "Null4", 00:06:40.167 "name": "Null4", 00:06:40.167 "nguid": "C7EAD2BE68AD4E69A1A22A29F8504938", 00:06:40.167 "uuid": "c7ead2be-68ad-4e69-a1a2-2a29f8504938" 00:06:40.167 } 00:06:40.167 ] 00:06:40.167 } 00:06:40.167 ] 00:06:40.167 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:40.167 14:11:21 -- target/discovery.sh@42 -- # seq 1 4 00:06:40.167 14:11:21 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:40.167 14:11:21 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:40.167 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:40.167 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:40.167 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:40.167 14:11:21 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:40.167 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:40.167 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:40.167 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:40.167 14:11:21 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:40.167 14:11:21 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:40.167 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:40.167 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:40.167 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:40.167 14:11:21 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:40.167 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:40.167 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:40.167 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:40.167 14:11:21 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:40.167 14:11:21 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:40.167 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:40.167 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:40.167 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:40.167 14:11:21 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:40.167 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:40.167 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:40.167 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:40.167 14:11:21 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:40.167 14:11:21 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:40.167 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:40.167 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:40.167 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
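The deletions above mirror the setup loop from earlier in the suite; Null4 is dropped just below to complete the pattern. Condensed into one place, a sketch of that create/teardown symmetry, with the commands taken verbatim from the trace and only the loop form inferred from the seq 1 4 markers:

    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create Null$i 102400 512
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done
    # together with the discovery listener on 4420 and the 4430 referral,
    # these four subsystems produce the 6-record discovery log shown above
    for i in $(seq 1 4); do
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
        rpc_cmd bdev_null_delete Null$i
    done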
00:06:40.167 14:11:21 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:40.167 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:40.167 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:40.167 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:40.167 14:11:21 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:40.167 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:40.167 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:40.167 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:40.167 14:11:21 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:40.167 14:11:21 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:40.167 14:11:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:40.167 14:11:21 -- common/autotest_common.sh@10 -- # set +x 00:06:40.167 14:11:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:40.167 14:11:21 -- target/discovery.sh@49 -- # check_bdevs= 00:06:40.167 14:11:21 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:40.167 14:11:21 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:40.167 14:11:21 -- target/discovery.sh@57 -- # nvmftestfini 00:06:40.167 14:11:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:40.167 14:11:21 -- nvmf/common.sh@117 -- # sync 00:06:40.167 14:11:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:40.167 14:11:21 -- nvmf/common.sh@120 -- # set +e 00:06:40.167 14:11:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:40.167 14:11:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:40.167 rmmod nvme_tcp 00:06:40.427 rmmod nvme_fabrics 00:06:40.427 rmmod nvme_keyring 00:06:40.427 14:11:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:40.427 14:11:21 -- nvmf/common.sh@124 -- # set -e 00:06:40.427 14:11:21 -- nvmf/common.sh@125 -- # return 0 00:06:40.427 14:11:21 -- nvmf/common.sh@478 -- # '[' -n 3080420 ']' 00:06:40.427 14:11:21 -- nvmf/common.sh@479 -- # killprocess 3080420 00:06:40.427 14:11:21 -- common/autotest_common.sh@936 -- # '[' -z 3080420 ']' 00:06:40.427 14:11:21 -- common/autotest_common.sh@940 -- # kill -0 3080420 00:06:40.427 14:11:21 -- common/autotest_common.sh@941 -- # uname 00:06:40.427 14:11:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:40.427 14:11:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3080420 00:06:40.427 14:11:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:40.427 14:11:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:40.427 14:11:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3080420' 00:06:40.427 killing process with pid 3080420 00:06:40.427 14:11:21 -- common/autotest_common.sh@955 -- # kill 3080420 00:06:40.427 [2024-04-26 14:11:21.815691] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:06:40.427 14:11:21 -- common/autotest_common.sh@960 -- # wait 3080420 00:06:40.687 14:11:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:40.687 14:11:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:40.687 14:11:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:40.687 14:11:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:40.687 14:11:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:40.687 14:11:22 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.687 14:11:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:40.687 14:11:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.597 14:11:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:42.597 00:06:42.597 real 0m4.918s 00:06:42.597 user 0m3.824s 00:06:42.597 sys 0m1.557s 00:06:42.597 14:11:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:42.597 14:11:24 -- common/autotest_common.sh@10 -- # set +x 00:06:42.597 ************************************ 00:06:42.597 END TEST nvmf_discovery 00:06:42.597 ************************************ 00:06:42.597 14:11:24 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:42.597 14:11:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:42.597 14:11:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.597 14:11:24 -- common/autotest_common.sh@10 -- # set +x 00:06:42.856 ************************************ 00:06:42.856 START TEST nvmf_referrals 00:06:42.856 ************************************ 00:06:42.856 14:11:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:42.856 * Looking for test storage... 00:06:42.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:42.857 14:11:24 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:42.857 14:11:24 -- nvmf/common.sh@7 -- # uname -s 00:06:42.857 14:11:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.857 14:11:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.857 14:11:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.857 14:11:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.857 14:11:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.857 14:11:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.857 14:11:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.857 14:11:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.857 14:11:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.857 14:11:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.857 14:11:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:06:42.857 14:11:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:06:42.857 14:11:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.857 14:11:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.857 14:11:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:42.857 14:11:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.857 14:11:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:42.857 14:11:24 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.857 14:11:24 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.857 14:11:24 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.857 14:11:24 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.857 14:11:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.857 14:11:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.857 14:11:24 -- paths/export.sh@5 -- # export PATH 00:06:42.857 14:11:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.857 14:11:24 -- nvmf/common.sh@47 -- # : 0 00:06:42.857 14:11:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:42.857 14:11:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:42.857 14:11:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.857 14:11:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.857 14:11:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.857 14:11:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:42.857 14:11:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:42.857 14:11:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:42.857 14:11:24 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:42.857 14:11:24 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:42.857 14:11:24 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:42.857 14:11:24 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:42.857 14:11:24 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:42.857 14:11:24 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:42.857 14:11:24 -- target/referrals.sh@37 -- # nvmftestinit 00:06:42.857 14:11:24 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:06:42.857 14:11:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:42.857 14:11:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:42.857 14:11:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:42.857 14:11:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:42.857 14:11:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.857 14:11:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:42.857 14:11:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.857 14:11:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:42.857 14:11:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:42.857 14:11:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:42.857 14:11:24 -- common/autotest_common.sh@10 -- # set +x 00:06:44.763 14:11:25 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:44.763 14:11:25 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:44.763 14:11:25 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:44.763 14:11:25 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:44.763 14:11:25 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:44.763 14:11:25 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:44.763 14:11:25 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:44.763 14:11:25 -- nvmf/common.sh@295 -- # net_devs=() 00:06:44.763 14:11:25 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:44.763 14:11:25 -- nvmf/common.sh@296 -- # e810=() 00:06:44.763 14:11:25 -- nvmf/common.sh@296 -- # local -ga e810 00:06:44.763 14:11:25 -- nvmf/common.sh@297 -- # x722=() 00:06:44.763 14:11:25 -- nvmf/common.sh@297 -- # local -ga x722 00:06:44.763 14:11:25 -- nvmf/common.sh@298 -- # mlx=() 00:06:44.763 14:11:25 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:44.763 14:11:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:44.763 14:11:25 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:44.763 14:11:25 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:44.763 14:11:25 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:44.763 14:11:25 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:44.763 14:11:25 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:44.763 14:11:25 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:44.763 14:11:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:44.763 14:11:25 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:44.763 14:11:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:44.763 14:11:25 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:44.763 14:11:25 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:44.763 14:11:25 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:44.763 14:11:25 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:44.763 14:11:25 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:44.763 14:11:25 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:44.763 14:11:25 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:44.763 14:11:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:44.763 14:11:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:06:44.763 Found 0000:08:00.0 (0x8086 - 0x159b) 00:06:44.763 14:11:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:44.763 14:11:25 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:44.763 14:11:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:44.763 14:11:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:44.763 14:11:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:44.763 14:11:25 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:44.763 14:11:25 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:06:44.764 Found 0000:08:00.1 (0x8086 - 0x159b) 00:06:44.764 14:11:25 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:44.764 14:11:25 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:44.764 14:11:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:44.764 14:11:25 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:44.764 14:11:25 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:44.764 14:11:25 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:44.764 14:11:25 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:44.764 14:11:25 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:44.764 14:11:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:44.764 14:11:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:44.764 14:11:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:44.764 14:11:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:44.764 14:11:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:06:44.764 Found net devices under 0000:08:00.0: cvl_0_0 00:06:44.764 14:11:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:44.764 14:11:25 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:44.764 14:11:25 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:44.764 14:11:25 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:44.764 14:11:25 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:44.764 14:11:25 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:06:44.764 Found net devices under 0000:08:00.1: cvl_0_1 00:06:44.764 14:11:25 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:44.764 14:11:25 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:44.764 14:11:25 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:44.764 14:11:25 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:44.764 14:11:25 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:44.764 14:11:25 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:44.764 14:11:25 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:44.764 14:11:25 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:44.764 14:11:25 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:44.764 14:11:25 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:44.764 14:11:25 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:44.764 14:11:25 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:44.764 14:11:25 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:44.764 14:11:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:44.764 14:11:25 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:44.764 14:11:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:44.764 14:11:25 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:44.764 14:11:25 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:44.764 14:11:25 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
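The namespace plumbing that begins above and completes just below is the same nvmf_tcp_init sequence every TCP suite in this log runs. Collected into one sketch for reference, with the interface names as detected in this run:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator interface, then prove reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1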
00:06:44.764 14:11:26 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:44.764 14:11:26 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:44.764 14:11:26 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:44.764 14:11:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:44.764 14:11:26 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:44.764 14:11:26 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:44.764 14:11:26 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:44.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:44.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:06:44.764 00:06:44.764 --- 10.0.0.2 ping statistics --- 00:06:44.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.764 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:06:44.764 14:11:26 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:44.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:44.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:06:44.764 00:06:44.764 --- 10.0.0.1 ping statistics --- 00:06:44.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.764 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:06:44.764 14:11:26 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:44.764 14:11:26 -- nvmf/common.sh@411 -- # return 0 00:06:44.764 14:11:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:44.764 14:11:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:44.764 14:11:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:44.764 14:11:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:44.764 14:11:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:44.764 14:11:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:44.764 14:11:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:44.764 14:11:26 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:44.764 14:11:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:44.764 14:11:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:44.764 14:11:26 -- common/autotest_common.sh@10 -- # set +x 00:06:44.764 14:11:26 -- nvmf/common.sh@470 -- # nvmfpid=3082056 00:06:44.764 14:11:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:44.764 14:11:26 -- nvmf/common.sh@471 -- # waitforlisten 3082056 00:06:44.764 14:11:26 -- common/autotest_common.sh@817 -- # '[' -z 3082056 ']' 00:06:44.764 14:11:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.764 14:11:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:44.764 14:11:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.764 14:11:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:44.764 14:11:26 -- common/autotest_common.sh@10 -- # set +x 00:06:44.764 [2024-04-26 14:11:26.144386] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
00:06:44.764 [2024-04-26 14:11:26.144492] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:44.764 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.764 [2024-04-26 14:11:26.211724] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:44.764 [2024-04-26 14:11:26.330127] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:44.764 [2024-04-26 14:11:26.330183] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:44.764 [2024-04-26 14:11:26.330199] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:44.764 [2024-04-26 14:11:26.330212] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:44.764 [2024-04-26 14:11:26.330224] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:44.764 [2024-04-26 14:11:26.330283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.764 [2024-04-26 14:11:26.330334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.764 [2024-04-26 14:11:26.330406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.764 [2024-04-26 14:11:26.330409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.023 14:11:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:45.023 14:11:26 -- common/autotest_common.sh@850 -- # return 0 00:06:45.023 14:11:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:45.023 14:11:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:45.023 14:11:26 -- common/autotest_common.sh@10 -- # set +x 00:06:45.023 14:11:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:45.023 14:11:26 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:45.023 14:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.023 14:11:26 -- common/autotest_common.sh@10 -- # set +x 00:06:45.023 [2024-04-26 14:11:26.485328] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.023 14:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.023 14:11:26 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:45.023 14:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.023 14:11:26 -- common/autotest_common.sh@10 -- # set +x 00:06:45.023 [2024-04-26 14:11:26.497556] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:06:45.023 14:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.023 14:11:26 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:45.023 14:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.023 14:11:26 -- common/autotest_common.sh@10 -- # set +x 00:06:45.023 14:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.023 14:11:26 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:45.023 14:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.023 14:11:26 -- common/autotest_common.sh@10 -- # set +x 00:06:45.023 14:11:26 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:06:45.023 14:11:26 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:45.023 14:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.023 14:11:26 -- common/autotest_common.sh@10 -- # set +x 00:06:45.023 14:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.023 14:11:26 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:45.023 14:11:26 -- target/referrals.sh@48 -- # jq length 00:06:45.023 14:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.023 14:11:26 -- common/autotest_common.sh@10 -- # set +x 00:06:45.023 14:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.023 14:11:26 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:45.023 14:11:26 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:45.023 14:11:26 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:45.023 14:11:26 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:45.023 14:11:26 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:45.023 14:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.023 14:11:26 -- common/autotest_common.sh@10 -- # set +x 00:06:45.023 14:11:26 -- target/referrals.sh@21 -- # sort 00:06:45.023 14:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.282 14:11:26 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:45.282 14:11:26 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:45.282 14:11:26 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:45.282 14:11:26 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:45.282 14:11:26 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:45.282 14:11:26 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:45.282 14:11:26 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:45.282 14:11:26 -- target/referrals.sh@26 -- # sort 00:06:45.282 14:11:26 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:45.282 14:11:26 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:45.282 14:11:26 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:45.282 14:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.282 14:11:26 -- common/autotest_common.sh@10 -- # set +x 00:06:45.282 14:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.282 14:11:26 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:45.282 14:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.282 14:11:26 -- common/autotest_common.sh@10 -- # set +x 00:06:45.282 14:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.282 14:11:26 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:45.282 14:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.282 14:11:26 -- common/autotest_common.sh@10 -- # set +x 00:06:45.282 14:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.282 14:11:26 -- target/referrals.sh@56 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:06:45.282 14:11:26 -- target/referrals.sh@56 -- # jq length 00:06:45.282 14:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.282 14:11:26 -- common/autotest_common.sh@10 -- # set +x 00:06:45.282 14:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.282 14:11:26 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:45.282 14:11:26 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:45.282 14:11:26 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:45.282 14:11:26 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:45.282 14:11:26 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:45.282 14:11:26 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:45.282 14:11:26 -- target/referrals.sh@26 -- # sort 00:06:45.541 14:11:26 -- target/referrals.sh@26 -- # echo 00:06:45.541 14:11:26 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:45.541 14:11:26 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:06:45.541 14:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.541 14:11:26 -- common/autotest_common.sh@10 -- # set +x 00:06:45.541 14:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.541 14:11:26 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:45.541 14:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.541 14:11:26 -- common/autotest_common.sh@10 -- # set +x 00:06:45.541 14:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.541 14:11:26 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:45.541 14:11:26 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:45.541 14:11:26 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:45.541 14:11:26 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:45.541 14:11:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.541 14:11:26 -- common/autotest_common.sh@10 -- # set +x 00:06:45.541 14:11:26 -- target/referrals.sh@21 -- # sort 00:06:45.541 14:11:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.541 14:11:26 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:45.541 14:11:26 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:45.541 14:11:26 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:45.541 14:11:26 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:45.541 14:11:26 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:45.541 14:11:26 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:45.541 14:11:26 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:45.541 14:11:26 -- target/referrals.sh@26 -- # sort 00:06:45.541 14:11:27 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:45.541 14:11:27 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:45.541 14:11:27 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme 
subsystem' 00:06:45.541 14:11:27 -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:45.541 14:11:27 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:45.541 14:11:27 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:45.541 14:11:27 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:45.799 14:11:27 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:45.799 14:11:27 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:45.799 14:11:27 -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:45.799 14:11:27 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:45.799 14:11:27 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:45.799 14:11:27 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:45.799 14:11:27 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:45.799 14:11:27 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:45.799 14:11:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.799 14:11:27 -- common/autotest_common.sh@10 -- # set +x 00:06:45.799 14:11:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.799 14:11:27 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:45.799 14:11:27 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:45.799 14:11:27 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:45.799 14:11:27 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:45.799 14:11:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:45.799 14:11:27 -- common/autotest_common.sh@10 -- # set +x 00:06:45.799 14:11:27 -- target/referrals.sh@21 -- # sort 00:06:45.799 14:11:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:45.799 14:11:27 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:45.799 14:11:27 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:45.799 14:11:27 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:45.799 14:11:27 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:45.799 14:11:27 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:45.799 14:11:27 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:45.799 14:11:27 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:45.799 14:11:27 -- target/referrals.sh@26 -- # sort 00:06:46.057 14:11:27 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:46.057 14:11:27 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:46.057 14:11:27 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:46.057 14:11:27 -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:46.057 14:11:27 -- target/referrals.sh@31 -- # 
local 'subtype=nvme subsystem' 00:06:46.057 14:11:27 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:46.057 14:11:27 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:46.057 14:11:27 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:46.057 14:11:27 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:46.057 14:11:27 -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:46.057 14:11:27 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:46.057 14:11:27 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:46.057 14:11:27 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:46.316 14:11:27 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:46.316 14:11:27 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:46.316 14:11:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:46.316 14:11:27 -- common/autotest_common.sh@10 -- # set +x 00:06:46.316 14:11:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:46.316 14:11:27 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:46.316 14:11:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:46.316 14:11:27 -- target/referrals.sh@82 -- # jq length 00:06:46.316 14:11:27 -- common/autotest_common.sh@10 -- # set +x 00:06:46.316 14:11:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:46.316 14:11:27 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:46.316 14:11:27 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:46.316 14:11:27 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:46.316 14:11:27 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:46.316 14:11:27 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:46.316 14:11:27 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:46.316 14:11:27 -- target/referrals.sh@26 -- # sort 00:06:46.316 14:11:27 -- target/referrals.sh@26 -- # echo 00:06:46.316 14:11:27 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:46.316 14:11:27 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:46.316 14:11:27 -- target/referrals.sh@86 -- # nvmftestfini 00:06:46.316 14:11:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:46.316 14:11:27 -- nvmf/common.sh@117 -- # sync 00:06:46.316 14:11:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:46.316 14:11:27 -- nvmf/common.sh@120 -- # set +e 00:06:46.316 14:11:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:46.316 14:11:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:46.316 rmmod nvme_tcp 00:06:46.316 rmmod nvme_fabrics 00:06:46.316 rmmod nvme_keyring 00:06:46.316 14:11:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:46.316 14:11:27 -- nvmf/common.sh@124 -- # set -e 
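Every referral assertion in the nvmf_referrals run above follows the same round-trip: mutate the referral list over RPC, then confirm that the target's own RPC view and an initiator-side `nvme discover` agree. A minimal sketch of one such round-trip, assuming rpc_cmd resolves to the workspace's spdk/scripts/rpc.py, with the listener address and jq filter taken verbatim from the log (the per-host --hostnqn/--hostid flags this run passes to nvme discover are omitted for brevity):

    # add a referral, then read the referral list back from both sides
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    # the two sorted lists must match before the test proceeds
    rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430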
00:06:46.316 14:11:27 -- nvmf/common.sh@125 -- # return 0 00:06:46.316 14:11:27 -- nvmf/common.sh@478 -- # '[' -n 3082056 ']' 00:06:46.316 14:11:27 -- nvmf/common.sh@479 -- # killprocess 3082056 00:06:46.316 14:11:27 -- common/autotest_common.sh@936 -- # '[' -z 3082056 ']' 00:06:46.316 14:11:27 -- common/autotest_common.sh@940 -- # kill -0 3082056 00:06:46.316 14:11:27 -- common/autotest_common.sh@941 -- # uname 00:06:46.316 14:11:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:46.316 14:11:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3082056 00:06:46.574 14:11:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:46.574 14:11:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:46.574 14:11:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3082056' 00:06:46.574 killing process with pid 3082056 00:06:46.574 14:11:27 -- common/autotest_common.sh@955 -- # kill 3082056 00:06:46.574 14:11:27 -- common/autotest_common.sh@960 -- # wait 3082056 00:06:46.575 14:11:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:46.575 14:11:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:46.575 14:11:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:46.575 14:11:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:46.575 14:11:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:46.575 14:11:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.575 14:11:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:46.575 14:11:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.115 14:11:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:49.115 00:06:49.115 real 0m5.946s 00:06:49.115 user 0m8.343s 00:06:49.115 sys 0m1.704s 00:06:49.115 14:11:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:49.115 14:11:30 -- common/autotest_common.sh@10 -- # set +x 00:06:49.115 ************************************ 00:06:49.115 END TEST nvmf_referrals 00:06:49.115 ************************************ 00:06:49.115 14:11:30 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:49.115 14:11:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:49.115 14:11:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.115 14:11:30 -- common/autotest_common.sh@10 -- # set +x 00:06:49.115 ************************************ 00:06:49.115 START TEST nvmf_connect_disconnect 00:06:49.115 ************************************ 00:06:49.115 14:11:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:49.115 * Looking for test storage... 
00:06:49.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:49.115 14:11:30 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:49.115 14:11:30 -- nvmf/common.sh@7 -- # uname -s 00:06:49.115 14:11:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:49.115 14:11:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.115 14:11:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.115 14:11:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.115 14:11:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.115 14:11:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.115 14:11:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.115 14:11:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.115 14:11:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.115 14:11:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.115 14:11:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:06:49.115 14:11:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:06:49.115 14:11:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.115 14:11:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.115 14:11:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:49.115 14:11:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.115 14:11:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:49.115 14:11:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.115 14:11:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.115 14:11:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.116 14:11:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.116 14:11:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.116 14:11:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.116 14:11:30 -- paths/export.sh@5 -- # export PATH 00:06:49.116 14:11:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.116 14:11:30 -- nvmf/common.sh@47 -- # : 0 00:06:49.116 14:11:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:49.116 14:11:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:49.116 14:11:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.116 14:11:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.116 14:11:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.116 14:11:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:49.116 14:11:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:49.116 14:11:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:49.116 14:11:30 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:49.116 14:11:30 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:49.116 14:11:30 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:49.116 14:11:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:49.116 14:11:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:49.116 14:11:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:49.116 14:11:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:49.116 14:11:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:49.116 14:11:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.116 14:11:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:49.116 14:11:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.116 14:11:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:49.116 14:11:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:49.116 14:11:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:49.116 14:11:30 -- common/autotest_common.sh@10 -- # set +x 00:06:50.492 14:11:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:50.492 14:11:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:50.492 14:11:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:50.492 14:11:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:50.492 14:11:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:50.492 14:11:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:50.492 14:11:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:50.492 14:11:32 -- nvmf/common.sh@295 -- # net_devs=() 00:06:50.492 14:11:32 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:06:50.492 14:11:32 -- nvmf/common.sh@296 -- # e810=() 00:06:50.492 14:11:32 -- nvmf/common.sh@296 -- # local -ga e810 00:06:50.492 14:11:32 -- nvmf/common.sh@297 -- # x722=() 00:06:50.492 14:11:32 -- nvmf/common.sh@297 -- # local -ga x722 00:06:50.492 14:11:32 -- nvmf/common.sh@298 -- # mlx=() 00:06:50.492 14:11:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:50.492 14:11:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:50.492 14:11:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:50.492 14:11:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:50.492 14:11:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:50.492 14:11:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:50.492 14:11:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:50.492 14:11:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:50.492 14:11:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:50.492 14:11:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:50.492 14:11:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:50.492 14:11:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:50.492 14:11:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:50.492 14:11:32 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:50.492 14:11:32 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:50.492 14:11:32 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:50.492 14:11:32 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:50.492 14:11:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:50.492 14:11:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:50.492 14:11:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:06:50.492 Found 0000:08:00.0 (0x8086 - 0x159b) 00:06:50.492 14:11:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:50.492 14:11:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:50.492 14:11:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.492 14:11:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.492 14:11:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:50.492 14:11:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:50.492 14:11:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:06:50.492 Found 0000:08:00.1 (0x8086 - 0x159b) 00:06:50.492 14:11:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:50.492 14:11:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:50.492 14:11:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.492 14:11:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.492 14:11:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:50.492 14:11:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:50.492 14:11:32 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:50.492 14:11:32 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:50.492 14:11:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:50.492 14:11:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.492 14:11:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:50.492 14:11:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.492 14:11:32 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:08:00.0: cvl_0_0' 00:06:50.492 Found net devices under 0000:08:00.0: cvl_0_0 00:06:50.492 14:11:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.492 14:11:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:50.492 14:11:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.492 14:11:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:50.492 14:11:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.492 14:11:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:06:50.492 Found net devices under 0000:08:00.1: cvl_0_1 00:06:50.492 14:11:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.492 14:11:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:50.492 14:11:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:50.492 14:11:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:50.492 14:11:32 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:50.492 14:11:32 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:50.492 14:11:32 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:50.492 14:11:32 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:50.492 14:11:32 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:50.492 14:11:32 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:50.492 14:11:32 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:50.492 14:11:32 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:50.492 14:11:32 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:50.492 14:11:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:50.492 14:11:32 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:50.493 14:11:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:50.493 14:11:32 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:50.493 14:11:32 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:50.493 14:11:32 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:50.751 14:11:32 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:50.751 14:11:32 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:50.751 14:11:32 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:50.751 14:11:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:50.751 14:11:32 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:50.751 14:11:32 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:50.751 14:11:32 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:50.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:50.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:06:50.751 00:06:50.751 --- 10.0.0.2 ping statistics --- 00:06:50.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.751 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:06:50.751 14:11:32 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:50.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:50.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:06:50.752 00:06:50.752 --- 10.0.0.1 ping statistics --- 00:06:50.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.752 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:06:50.752 14:11:32 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:50.752 14:11:32 -- nvmf/common.sh@411 -- # return 0 00:06:50.752 14:11:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:50.752 14:11:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:50.752 14:11:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:50.752 14:11:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:50.752 14:11:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:50.752 14:11:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:50.752 14:11:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:50.752 14:11:32 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:50.752 14:11:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:50.752 14:11:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:50.752 14:11:32 -- common/autotest_common.sh@10 -- # set +x 00:06:50.752 14:11:32 -- nvmf/common.sh@470 -- # nvmfpid=3083770 00:06:50.752 14:11:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:50.752 14:11:32 -- nvmf/common.sh@471 -- # waitforlisten 3083770 00:06:50.752 14:11:32 -- common/autotest_common.sh@817 -- # '[' -z 3083770 ']' 00:06:50.752 14:11:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.752 14:11:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:50.752 14:11:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.752 14:11:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:50.752 14:11:32 -- common/autotest_common.sh@10 -- # set +x 00:06:50.752 [2024-04-26 14:11:32.227057] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:06:50.752 [2024-04-26 14:11:32.227157] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.752 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.752 [2024-04-26 14:11:32.294106] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.011 [2024-04-26 14:11:32.412855] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:51.011 [2024-04-26 14:11:32.412916] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:51.011 [2024-04-26 14:11:32.412932] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:51.011 [2024-04-26 14:11:32.412945] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:51.011 [2024-04-26 14:11:32.412957] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
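The five "disconnected 1 controller(s)" lines further down are connect_disconnect.sh's main loop: num_iterations=5 passes, each connecting an initiator to the subsystem and tearing it down again. A sketch of one pass, assuming the nqn.2016-06.io.spdk:cnode1 subsystem and 10.0.0.2:4420 listener that the next few RPCs create (this run additionally passes the host's --hostnqn/--hostid to nvme connect; the disconnect message below is printed by nvme-cli itself):

    # one connect/disconnect iteration against the test subsystem
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # -> 'disconnected 1 controller(s)'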
00:06:51.011 [2024-04-26 14:11:32.413034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.011 [2024-04-26 14:11:32.413098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.011 [2024-04-26 14:11:32.413148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.011 [2024-04-26 14:11:32.413152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.011 14:11:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:51.011 14:11:32 -- common/autotest_common.sh@850 -- # return 0 00:06:51.011 14:11:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:51.011 14:11:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:51.011 14:11:32 -- common/autotest_common.sh@10 -- # set +x 00:06:51.011 14:11:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:51.011 14:11:32 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:51.011 14:11:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:51.011 14:11:32 -- common/autotest_common.sh@10 -- # set +x 00:06:51.011 [2024-04-26 14:11:32.567335] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.011 14:11:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:51.011 14:11:32 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:51.011 14:11:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:51.011 14:11:32 -- common/autotest_common.sh@10 -- # set +x 00:06:51.269 14:11:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:51.269 14:11:32 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:51.269 14:11:32 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:51.269 14:11:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:51.269 14:11:32 -- common/autotest_common.sh@10 -- # set +x 00:06:51.269 14:11:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:51.269 14:11:32 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:51.269 14:11:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:51.269 14:11:32 -- common/autotest_common.sh@10 -- # set +x 00:06:51.269 14:11:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:51.269 14:11:32 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:51.269 14:11:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:51.269 14:11:32 -- common/autotest_common.sh@10 -- # set +x 00:06:51.269 [2024-04-26 14:11:32.616845] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:51.269 14:11:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:51.269 14:11:32 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:51.269 14:11:32 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:51.269 14:11:32 -- target/connect_disconnect.sh@34 -- # set +x 00:06:53.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:56.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:58.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:01.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:03.965 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:03.966 14:11:45 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:03.966 14:11:45 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:03.966 14:11:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:03.966 14:11:45 -- nvmf/common.sh@117 -- # sync 00:07:03.966 14:11:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:03.966 14:11:45 -- nvmf/common.sh@120 -- # set +e 00:07:03.966 14:11:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:03.966 14:11:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:03.966 rmmod nvme_tcp 00:07:03.966 rmmod nvme_fabrics 00:07:03.966 rmmod nvme_keyring 00:07:03.966 14:11:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:03.966 14:11:45 -- nvmf/common.sh@124 -- # set -e 00:07:03.966 14:11:45 -- nvmf/common.sh@125 -- # return 0 00:07:03.966 14:11:45 -- nvmf/common.sh@478 -- # '[' -n 3083770 ']' 00:07:03.966 14:11:45 -- nvmf/common.sh@479 -- # killprocess 3083770 00:07:03.966 14:11:45 -- common/autotest_common.sh@936 -- # '[' -z 3083770 ']' 00:07:03.966 14:11:45 -- common/autotest_common.sh@940 -- # kill -0 3083770 00:07:03.966 14:11:45 -- common/autotest_common.sh@941 -- # uname 00:07:03.966 14:11:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:03.966 14:11:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3083770 00:07:03.966 14:11:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:03.966 14:11:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:03.966 14:11:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3083770' 00:07:03.966 killing process with pid 3083770 00:07:03.966 14:11:45 -- common/autotest_common.sh@955 -- # kill 3083770 00:07:03.966 14:11:45 -- common/autotest_common.sh@960 -- # wait 3083770 00:07:03.966 14:11:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:03.966 14:11:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:03.966 14:11:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:03.966 14:11:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:03.966 14:11:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:03.966 14:11:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.966 14:11:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:03.966 14:11:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.509 14:11:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:06.509 00:07:06.509 real 0m17.257s 00:07:06.509 user 0m51.718s 00:07:06.509 sys 0m2.785s 00:07:06.509 14:11:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:06.509 14:11:47 -- common/autotest_common.sh@10 -- # set +x 00:07:06.509 ************************************ 00:07:06.509 END TEST nvmf_connect_disconnect 00:07:06.509 ************************************ 00:07:06.509 14:11:47 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:06.509 14:11:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:06.509 14:11:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.509 14:11:47 -- common/autotest_common.sh@10 -- # set +x 00:07:06.509 ************************************ 00:07:06.509 START TEST nvmf_multitarget 00:07:06.509 ************************************ 00:07:06.509 14:11:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh 
--transport=tcp 00:07:06.509 * Looking for test storage... 00:07:06.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.509 14:11:47 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:06.509 14:11:47 -- nvmf/common.sh@7 -- # uname -s 00:07:06.509 14:11:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.509 14:11:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.509 14:11:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.509 14:11:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.509 14:11:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.509 14:11:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.509 14:11:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.509 14:11:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.509 14:11:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.509 14:11:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.509 14:11:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:06.509 14:11:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:06.509 14:11:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.509 14:11:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.509 14:11:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:06.509 14:11:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.509 14:11:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:06.509 14:11:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.509 14:11:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.509 14:11:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.509 14:11:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.509 14:11:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.509 14:11:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.509 14:11:47 -- paths/export.sh@5 -- # export PATH 00:07:06.509 14:11:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.509 14:11:47 -- nvmf/common.sh@47 -- # : 0 00:07:06.509 14:11:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:06.509 14:11:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:06.509 14:11:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.509 14:11:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.509 14:11:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.509 14:11:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:06.509 14:11:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:06.509 14:11:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:06.509 14:11:47 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:06.509 14:11:47 -- target/multitarget.sh@15 -- # nvmftestinit 00:07:06.509 14:11:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:06.509 14:11:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:06.509 14:11:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:06.509 14:11:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:06.509 14:11:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:06.509 14:11:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.509 14:11:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:06.509 14:11:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.510 14:11:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:06.510 14:11:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:06.510 14:11:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:06.510 14:11:47 -- common/autotest_common.sh@10 -- # set +x 00:07:07.891 14:11:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:07.891 14:11:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:07.891 14:11:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:07.891 14:11:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:07.891 14:11:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:07.891 14:11:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:07.891 14:11:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:07.891 14:11:49 -- nvmf/common.sh@295 -- # net_devs=() 00:07:07.891 14:11:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:07.891 14:11:49 -- 
nvmf/common.sh@296 -- # e810=() 00:07:07.891 14:11:49 -- nvmf/common.sh@296 -- # local -ga e810 00:07:07.891 14:11:49 -- nvmf/common.sh@297 -- # x722=() 00:07:07.891 14:11:49 -- nvmf/common.sh@297 -- # local -ga x722 00:07:07.891 14:11:49 -- nvmf/common.sh@298 -- # mlx=() 00:07:07.891 14:11:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:07.891 14:11:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:07.891 14:11:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:07.891 14:11:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:07.891 14:11:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:07.891 14:11:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:07.891 14:11:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:07.891 14:11:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:07.891 14:11:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:07.891 14:11:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:07.891 14:11:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:07.891 14:11:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:07.891 14:11:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:07.891 14:11:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:07.891 14:11:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:07.891 14:11:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:07.891 14:11:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:07.891 14:11:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:07.891 14:11:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.891 14:11:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:07.891 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:07.891 14:11:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.891 14:11:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.891 14:11:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.891 14:11:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.891 14:11:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.891 14:11:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.891 14:11:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:07.891 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:07.891 14:11:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.891 14:11:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.891 14:11:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.891 14:11:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.891 14:11:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.891 14:11:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:07.891 14:11:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:07.891 14:11:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:07.891 14:11:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.891 14:11:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.891 14:11:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:07.891 14:11:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.892 14:11:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 
00:07:07.892 Found net devices under 0000:08:00.0: cvl_0_0 00:07:07.892 14:11:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.892 14:11:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.892 14:11:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.892 14:11:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:07.892 14:11:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.892 14:11:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:07.892 Found net devices under 0000:08:00.1: cvl_0_1 00:07:07.892 14:11:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.892 14:11:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:07.892 14:11:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:07.892 14:11:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:07.892 14:11:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:07.892 14:11:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:07.892 14:11:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:07.892 14:11:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:07.892 14:11:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:07.892 14:11:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:07.892 14:11:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:07.892 14:11:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:07.892 14:11:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:07.892 14:11:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:07.892 14:11:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:07.892 14:11:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:07.892 14:11:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:07.892 14:11:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:07.892 14:11:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:07.892 14:11:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:07.892 14:11:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:07.892 14:11:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:07.892 14:11:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:08.151 14:11:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:08.151 14:11:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:08.151 14:11:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:08.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:08.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:07:08.151 00:07:08.151 --- 10.0.0.2 ping statistics --- 00:07:08.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.151 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:07:08.151 14:11:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:08.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:08.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:07:08.151 00:07:08.151 --- 10.0.0.1 ping statistics --- 00:07:08.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.151 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:07:08.151 14:11:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:08.151 14:11:49 -- nvmf/common.sh@411 -- # return 0 00:07:08.151 14:11:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:08.151 14:11:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:08.151 14:11:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:08.151 14:11:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:08.151 14:11:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:08.151 14:11:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:08.151 14:11:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:08.151 14:11:49 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:08.151 14:11:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:08.151 14:11:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:08.151 14:11:49 -- common/autotest_common.sh@10 -- # set +x 00:07:08.151 14:11:49 -- nvmf/common.sh@470 -- # nvmfpid=3086588 00:07:08.151 14:11:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:08.151 14:11:49 -- nvmf/common.sh@471 -- # waitforlisten 3086588 00:07:08.151 14:11:49 -- common/autotest_common.sh@817 -- # '[' -z 3086588 ']' 00:07:08.151 14:11:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.151 14:11:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:08.151 14:11:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.151 14:11:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:08.151 14:11:49 -- common/autotest_common.sh@10 -- # set +x 00:07:08.151 [2024-04-26 14:11:49.566260] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:07:08.151 [2024-04-26 14:11:49.566348] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.151 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.151 [2024-04-26 14:11:49.630185] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.409 [2024-04-26 14:11:49.745884] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.409 [2024-04-26 14:11:49.745939] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:08.409 [2024-04-26 14:11:49.745956] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.409 [2024-04-26 14:11:49.745969] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.410 [2024-04-26 14:11:49.745981] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
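Before starting this target, common.sh rebuilt the same two-namespace topology seen in the earlier tests: the first ice port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings above confirm reachability in both directions. Condensed from the commands logged above (interface names are this host's ice NICs):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns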
00:07:08.410 [2024-04-26 14:11:49.746065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.410 [2024-04-26 14:11:49.746118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.410 [2024-04-26 14:11:49.746147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.410 [2024-04-26 14:11:49.746149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.410 14:11:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:08.410 14:11:49 -- common/autotest_common.sh@850 -- # return 0 00:07:08.410 14:11:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:08.410 14:11:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:08.410 14:11:49 -- common/autotest_common.sh@10 -- # set +x 00:07:08.410 14:11:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:08.410 14:11:49 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:08.410 14:11:49 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:08.410 14:11:49 -- target/multitarget.sh@21 -- # jq length 00:07:08.666 14:11:50 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:08.666 14:11:50 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:08.666 "nvmf_tgt_1" 00:07:08.666 14:11:50 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:08.924 "nvmf_tgt_2" 00:07:08.924 14:11:50 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:08.924 14:11:50 -- target/multitarget.sh@28 -- # jq length 00:07:08.924 14:11:50 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:08.924 14:11:50 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:09.182 true 00:07:09.182 14:11:50 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:09.182 true 00:07:09.182 14:11:50 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:09.182 14:11:50 -- target/multitarget.sh@35 -- # jq length 00:07:09.441 14:11:50 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:09.441 14:11:50 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:09.441 14:11:50 -- target/multitarget.sh@41 -- # nvmftestfini 00:07:09.441 14:11:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:09.441 14:11:50 -- nvmf/common.sh@117 -- # sync 00:07:09.441 14:11:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:09.441 14:11:50 -- nvmf/common.sh@120 -- # set +e 00:07:09.441 14:11:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:09.441 14:11:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:09.441 rmmod nvme_tcp 00:07:09.441 rmmod nvme_fabrics 00:07:09.441 rmmod nvme_keyring 00:07:09.441 14:11:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:09.441 14:11:50 -- nvmf/common.sh@124 -- # set -e 00:07:09.441 14:11:50 -- nvmf/common.sh@125 -- # return 0 
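The multitarget test drives SPDK's multi-target RPCs through multitarget_rpc.py rather than the plain rpc.py. The cycle the log above records, condensed (script path as logged; `jq length` counts the entries in the JSON array each nvmf_get_targets call returns):

    multitarget_rpc.py nvmf_get_targets | jq length        # 1: just the default target
    multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
    multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
    multitarget_rpc.py nvmf_get_targets | jq length        # now 3
    multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
    multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
    multitarget_rpc.py nvmf_get_targets | jq length        # back to 1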
00:07:09.441 14:11:50 -- nvmf/common.sh@478 -- # '[' -n 3086588 ']' 00:07:09.441 14:11:50 -- nvmf/common.sh@479 -- # killprocess 3086588 00:07:09.441 14:11:50 -- common/autotest_common.sh@936 -- # '[' -z 3086588 ']' 00:07:09.441 14:11:50 -- common/autotest_common.sh@940 -- # kill -0 3086588 00:07:09.441 14:11:50 -- common/autotest_common.sh@941 -- # uname 00:07:09.441 14:11:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:09.441 14:11:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3086588 00:07:09.441 14:11:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:09.441 14:11:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:09.441 14:11:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3086588' 00:07:09.441 killing process with pid 3086588 00:07:09.441 14:11:50 -- common/autotest_common.sh@955 -- # kill 3086588 00:07:09.441 14:11:50 -- common/autotest_common.sh@960 -- # wait 3086588 00:07:09.700 14:11:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:09.700 14:11:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:09.700 14:11:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:09.701 14:11:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:09.701 14:11:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:09.701 14:11:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.701 14:11:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:09.701 14:11:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.609 14:11:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:11.609 00:07:11.609 real 0m5.453s 00:07:11.609 user 0m6.807s 00:07:11.609 sys 0m1.606s 00:07:11.609 14:11:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:11.609 14:11:53 -- common/autotest_common.sh@10 -- # set +x 00:07:11.609 ************************************ 00:07:11.609 END TEST nvmf_multitarget 00:07:11.609 ************************************ 00:07:11.867 14:11:53 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:11.867 14:11:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:11.867 14:11:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.867 14:11:53 -- common/autotest_common.sh@10 -- # set +x 00:07:11.867 ************************************ 00:07:11.867 START TEST nvmf_rpc 00:07:11.867 ************************************ 00:07:11.867 14:11:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:11.867 * Looking for test storage... 
00:07:11.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.867 14:11:53 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.867 14:11:53 -- nvmf/common.sh@7 -- # uname -s 00:07:11.867 14:11:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.867 14:11:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.867 14:11:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.867 14:11:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.867 14:11:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.867 14:11:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.867 14:11:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.867 14:11:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.867 14:11:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.867 14:11:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.867 14:11:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:11.867 14:11:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:11.867 14:11:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.867 14:11:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.867 14:11:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.867 14:11:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.867 14:11:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.867 14:11:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.867 14:11:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.868 14:11:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.868 14:11:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.868 14:11:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.868 14:11:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.868 14:11:53 -- paths/export.sh@5 -- # export PATH 00:07:11.868 14:11:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.868 14:11:53 -- nvmf/common.sh@47 -- # : 0 00:07:11.868 14:11:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:11.868 14:11:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:11.868 14:11:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.868 14:11:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.868 14:11:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.868 14:11:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:11.868 14:11:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:11.868 14:11:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:11.868 14:11:53 -- target/rpc.sh@11 -- # loops=5 00:07:11.868 14:11:53 -- target/rpc.sh@23 -- # nvmftestinit 00:07:11.868 14:11:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:11.868 14:11:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.868 14:11:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:11.868 14:11:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:11.868 14:11:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:11.868 14:11:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.868 14:11:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:11.868 14:11:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.868 14:11:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:11.868 14:11:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:11.868 14:11:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:11.868 14:11:53 -- common/autotest_common.sh@10 -- # set +x 00:07:13.773 14:11:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:13.773 14:11:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:13.773 14:11:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:13.773 14:11:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:13.773 14:11:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:13.773 14:11:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:13.773 14:11:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:13.773 14:11:54 -- nvmf/common.sh@295 -- # net_devs=() 00:07:13.773 14:11:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:13.773 14:11:54 -- nvmf/common.sh@296 -- # e810=() 00:07:13.773 14:11:54 -- nvmf/common.sh@296 -- # local -ga e810 00:07:13.773 
14:11:54 -- nvmf/common.sh@297 -- # x722=() 00:07:13.773 14:11:54 -- nvmf/common.sh@297 -- # local -ga x722 00:07:13.773 14:11:54 -- nvmf/common.sh@298 -- # mlx=() 00:07:13.773 14:11:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:13.773 14:11:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:13.773 14:11:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:13.773 14:11:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:13.773 14:11:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:13.773 14:11:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:13.773 14:11:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:13.773 14:11:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:13.773 14:11:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:13.773 14:11:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:13.773 14:11:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:13.773 14:11:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:13.773 14:11:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:13.773 14:11:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:13.773 14:11:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:13.773 14:11:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:13.773 14:11:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:13.773 14:11:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:13.773 14:11:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:13.773 14:11:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:13.773 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:13.773 14:11:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:13.773 14:11:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:13.773 14:11:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.773 14:11:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.773 14:11:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:13.773 14:11:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:13.773 14:11:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:13.773 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:13.773 14:11:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:13.773 14:11:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:13.773 14:11:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.773 14:11:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.773 14:11:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:13.773 14:11:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:13.773 14:11:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:13.773 14:11:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:13.773 14:11:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:13.773 14:11:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.773 14:11:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:13.773 14:11:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.773 14:11:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:13.773 Found net devices under 0000:08:00.0: cvl_0_0 00:07:13.773 14:11:54 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:13.773 14:11:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:13.773 14:11:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.773 14:11:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:13.773 14:11:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.773 14:11:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:13.773 Found net devices under 0000:08:00.1: cvl_0_1 00:07:13.773 14:11:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.773 14:11:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:13.773 14:11:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:13.773 14:11:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:13.773 14:11:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:13.773 14:11:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:13.773 14:11:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:13.773 14:11:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:13.773 14:11:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:13.773 14:11:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:13.773 14:11:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:13.773 14:11:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:13.773 14:11:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:13.773 14:11:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:13.773 14:11:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:13.773 14:11:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:13.773 14:11:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:13.773 14:11:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:13.773 14:11:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:13.773 14:11:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:13.773 14:11:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:13.773 14:11:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:13.773 14:11:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:13.773 14:11:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:13.773 14:11:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:13.773 14:11:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:13.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:13.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.437 ms 00:07:13.773 00:07:13.773 --- 10.0.0.2 ping statistics --- 00:07:13.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.773 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:07:13.773 14:11:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:13.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:13.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:07:13.773 00:07:13.773 --- 10.0.0.1 ping statistics --- 00:07:13.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.773 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:07:13.773 14:11:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:13.774 14:11:55 -- nvmf/common.sh@411 -- # return 0 00:07:13.774 14:11:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:13.774 14:11:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:13.774 14:11:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:13.774 14:11:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:13.774 14:11:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:13.774 14:11:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:13.774 14:11:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:13.774 14:11:55 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:13.774 14:11:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:13.774 14:11:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:13.774 14:11:55 -- common/autotest_common.sh@10 -- # set +x 00:07:13.774 14:11:55 -- nvmf/common.sh@470 -- # nvmfpid=3088228 00:07:13.774 14:11:55 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:13.774 14:11:55 -- nvmf/common.sh@471 -- # waitforlisten 3088228 00:07:13.774 14:11:55 -- common/autotest_common.sh@817 -- # '[' -z 3088228 ']' 00:07:13.774 14:11:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.774 14:11:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:13.774 14:11:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.774 14:11:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:13.774 14:11:55 -- common/autotest_common.sh@10 -- # set +x 00:07:13.774 [2024-04-26 14:11:55.126470] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:07:13.774 [2024-04-26 14:11:55.126558] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.774 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.774 [2024-04-26 14:11:55.190879] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.774 [2024-04-26 14:11:55.306356] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:13.774 [2024-04-26 14:11:55.306414] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:13.774 [2024-04-26 14:11:55.306430] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:13.774 [2024-04-26 14:11:55.306443] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:13.774 [2024-04-26 14:11:55.306455] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:13.774 [2024-04-26 14:11:55.306535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.774 [2024-04-26 14:11:55.306588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.774 [2024-04-26 14:11:55.306619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.774 [2024-04-26 14:11:55.306622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.033 14:11:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:14.033 14:11:55 -- common/autotest_common.sh@850 -- # return 0 00:07:14.033 14:11:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:14.033 14:11:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:14.033 14:11:55 -- common/autotest_common.sh@10 -- # set +x 00:07:14.033 14:11:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:14.033 14:11:55 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:14.033 14:11:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.033 14:11:55 -- common/autotest_common.sh@10 -- # set +x 00:07:14.033 14:11:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.033 14:11:55 -- target/rpc.sh@26 -- # stats='{ 00:07:14.033 "tick_rate": 2700000000, 00:07:14.033 "poll_groups": [ 00:07:14.033 { 00:07:14.033 "name": "nvmf_tgt_poll_group_0", 00:07:14.033 "admin_qpairs": 0, 00:07:14.033 "io_qpairs": 0, 00:07:14.033 "current_admin_qpairs": 0, 00:07:14.033 "current_io_qpairs": 0, 00:07:14.033 "pending_bdev_io": 0, 00:07:14.033 "completed_nvme_io": 0, 00:07:14.033 "transports": [] 00:07:14.033 }, 00:07:14.033 { 00:07:14.033 "name": "nvmf_tgt_poll_group_1", 00:07:14.033 "admin_qpairs": 0, 00:07:14.033 "io_qpairs": 0, 00:07:14.033 "current_admin_qpairs": 0, 00:07:14.033 "current_io_qpairs": 0, 00:07:14.033 "pending_bdev_io": 0, 00:07:14.033 "completed_nvme_io": 0, 00:07:14.033 "transports": [] 00:07:14.033 }, 00:07:14.033 { 00:07:14.033 "name": "nvmf_tgt_poll_group_2", 00:07:14.033 "admin_qpairs": 0, 00:07:14.033 "io_qpairs": 0, 00:07:14.033 "current_admin_qpairs": 0, 00:07:14.033 "current_io_qpairs": 0, 00:07:14.033 "pending_bdev_io": 0, 00:07:14.033 "completed_nvme_io": 0, 00:07:14.033 "transports": [] 00:07:14.033 }, 00:07:14.033 { 00:07:14.033 "name": "nvmf_tgt_poll_group_3", 00:07:14.033 "admin_qpairs": 0, 00:07:14.033 "io_qpairs": 0, 00:07:14.033 "current_admin_qpairs": 0, 00:07:14.033 "current_io_qpairs": 0, 00:07:14.033 "pending_bdev_io": 0, 00:07:14.033 "completed_nvme_io": 0, 00:07:14.033 "transports": [] 00:07:14.033 } 00:07:14.033 ] 00:07:14.033 }' 00:07:14.033 14:11:55 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:14.033 14:11:55 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:14.033 14:11:55 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:14.033 14:11:55 -- target/rpc.sh@15 -- # wc -l 00:07:14.033 14:11:55 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:14.033 14:11:55 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:14.033 14:11:55 -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:14.033 14:11:55 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:14.033 14:11:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.033 14:11:55 -- common/autotest_common.sh@10 -- # set +x 00:07:14.033 [2024-04-26 14:11:55.549608] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.033 14:11:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.033 14:11:55 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:14.033 14:11:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.033 14:11:55 -- common/autotest_common.sh@10 -- # set +x 00:07:14.033 14:11:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.033 14:11:55 -- target/rpc.sh@33 -- # stats='{ 00:07:14.033 "tick_rate": 2700000000, 00:07:14.033 "poll_groups": [ 00:07:14.033 { 00:07:14.033 "name": "nvmf_tgt_poll_group_0", 00:07:14.033 "admin_qpairs": 0, 00:07:14.033 "io_qpairs": 0, 00:07:14.033 "current_admin_qpairs": 0, 00:07:14.033 "current_io_qpairs": 0, 00:07:14.033 "pending_bdev_io": 0, 00:07:14.033 "completed_nvme_io": 0, 00:07:14.033 "transports": [ 00:07:14.033 { 00:07:14.033 "trtype": "TCP" 00:07:14.033 } 00:07:14.033 ] 00:07:14.033 }, 00:07:14.033 { 00:07:14.033 "name": "nvmf_tgt_poll_group_1", 00:07:14.033 "admin_qpairs": 0, 00:07:14.033 "io_qpairs": 0, 00:07:14.033 "current_admin_qpairs": 0, 00:07:14.033 "current_io_qpairs": 0, 00:07:14.033 "pending_bdev_io": 0, 00:07:14.033 "completed_nvme_io": 0, 00:07:14.033 "transports": [ 00:07:14.033 { 00:07:14.033 "trtype": "TCP" 00:07:14.033 } 00:07:14.033 ] 00:07:14.033 }, 00:07:14.033 { 00:07:14.033 "name": "nvmf_tgt_poll_group_2", 00:07:14.033 "admin_qpairs": 0, 00:07:14.033 "io_qpairs": 0, 00:07:14.033 "current_admin_qpairs": 0, 00:07:14.033 "current_io_qpairs": 0, 00:07:14.033 "pending_bdev_io": 0, 00:07:14.033 "completed_nvme_io": 0, 00:07:14.033 "transports": [ 00:07:14.033 { 00:07:14.033 "trtype": "TCP" 00:07:14.033 } 00:07:14.033 ] 00:07:14.033 }, 00:07:14.033 { 00:07:14.033 "name": "nvmf_tgt_poll_group_3", 00:07:14.033 "admin_qpairs": 0, 00:07:14.033 "io_qpairs": 0, 00:07:14.033 "current_admin_qpairs": 0, 00:07:14.033 "current_io_qpairs": 0, 00:07:14.033 "pending_bdev_io": 0, 00:07:14.033 "completed_nvme_io": 0, 00:07:14.033 "transports": [ 00:07:14.033 { 00:07:14.033 "trtype": "TCP" 00:07:14.033 } 00:07:14.033 ] 00:07:14.033 } 00:07:14.033 ] 00:07:14.033 }' 00:07:14.033 14:11:55 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:14.033 14:11:55 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:14.033 14:11:55 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:14.033 14:11:55 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:14.292 14:11:55 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:14.292 14:11:55 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:14.292 14:11:55 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:14.292 14:11:55 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:14.292 14:11:55 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:14.292 14:11:55 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:14.292 14:11:55 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:14.292 14:11:55 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:14.292 14:11:55 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:14.292 14:11:55 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:14.292 14:11:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.292 14:11:55 -- common/autotest_common.sh@10 -- # set +x 00:07:14.292 Malloc1 00:07:14.292 14:11:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.292 14:11:55 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:14.292 14:11:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.292 14:11:55 -- common/autotest_common.sh@10 -- # set +x 00:07:14.292 
14:11:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.292 14:11:55 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:14.292 14:11:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.292 14:11:55 -- common/autotest_common.sh@10 -- # set +x 00:07:14.292 14:11:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.292 14:11:55 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:14.292 14:11:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.292 14:11:55 -- common/autotest_common.sh@10 -- # set +x 00:07:14.292 14:11:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.292 14:11:55 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:14.292 14:11:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.292 14:11:55 -- common/autotest_common.sh@10 -- # set +x 00:07:14.292 [2024-04-26 14:11:55.708220] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.292 14:11:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.292 14:11:55 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:07:14.292 14:11:55 -- common/autotest_common.sh@638 -- # local es=0 00:07:14.292 14:11:55 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:07:14.292 14:11:55 -- common/autotest_common.sh@626 -- # local arg=nvme 00:07:14.292 14:11:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:14.292 14:11:55 -- common/autotest_common.sh@630 -- # type -t nvme 00:07:14.292 14:11:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:14.292 14:11:55 -- common/autotest_common.sh@632 -- # type -P nvme 00:07:14.292 14:11:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:14.292 14:11:55 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:07:14.292 14:11:55 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:07:14.292 14:11:55 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:07:14.292 [2024-04-26 14:11:55.730754] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc' 00:07:14.292 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:14.292 could not add new controller: failed to write to nvme-fabrics device 00:07:14.292 14:11:55 -- common/autotest_common.sh@641 -- # es=1 00:07:14.292 14:11:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:14.292 14:11:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:14.292 14:11:55 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:07:14.292 14:11:55 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:14.292 14:11:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.292 14:11:55 -- common/autotest_common.sh@10 -- # set +x 00:07:14.292 14:11:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.292 14:11:55 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:14.858 14:11:56 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:14.858 14:11:56 -- common/autotest_common.sh@1184 -- # local i=0 00:07:14.858 14:11:56 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:14.858 14:11:56 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:14.858 14:11:56 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:16.759 14:11:58 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:16.759 14:11:58 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:16.759 14:11:58 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:16.759 14:11:58 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:16.759 14:11:58 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:16.759 14:11:58 -- common/autotest_common.sh@1194 -- # return 0 00:07:16.759 14:11:58 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:16.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:16.759 14:11:58 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:16.759 14:11:58 -- common/autotest_common.sh@1205 -- # local i=0 00:07:16.759 14:11:58 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:16.759 14:11:58 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:16.759 14:11:58 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:16.759 14:11:58 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:16.759 14:11:58 -- common/autotest_common.sh@1217 -- # return 0 00:07:16.759 14:11:58 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:16.759 14:11:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:16.759 14:11:58 -- common/autotest_common.sh@10 -- # set +x 00:07:17.018 14:11:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.018 14:11:58 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:17.018 14:11:58 -- common/autotest_common.sh@638 -- # local es=0 00:07:17.018 14:11:58 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:17.018 14:11:58 -- common/autotest_common.sh@626 -- # local arg=nvme 00:07:17.018 14:11:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:17.018 14:11:58 -- common/autotest_common.sh@630 -- # type -t nvme 00:07:17.018 14:11:58 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:17.018 14:11:58 -- common/autotest_common.sh@632 -- # type -P nvme 00:07:17.018 14:11:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:17.018 14:11:58 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:07:17.018 14:11:58 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:07:17.018 14:11:58 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:17.018 [2024-04-26 14:11:58.348419] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc' 00:07:17.018 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:17.018 could not add new controller: failed to write to nvme-fabrics device 00:07:17.018 14:11:58 -- common/autotest_common.sh@641 -- # es=1 00:07:17.018 14:11:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:17.018 14:11:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:17.018 14:11:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:17.018 14:11:58 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:17.018 14:11:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:17.018 14:11:58 -- common/autotest_common.sh@10 -- # set +x 00:07:17.018 14:11:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:17.018 14:11:58 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:17.277 14:11:58 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:17.277 14:11:58 -- common/autotest_common.sh@1184 -- # local i=0 00:07:17.277 14:11:58 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:17.277 14:11:58 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:17.277 14:11:58 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:19.805 14:12:00 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:19.805 14:12:00 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:19.805 14:12:00 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:19.805 14:12:00 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:19.805 14:12:00 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:19.805 14:12:00 -- common/autotest_common.sh@1194 -- # return 0 00:07:19.805 14:12:00 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:19.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:19.805 14:12:00 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:19.805 14:12:00 -- common/autotest_common.sh@1205 -- # local i=0 00:07:19.805 14:12:00 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:19.805 14:12:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:19.805 14:12:00 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:19.805 14:12:00 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:19.805 14:12:00 -- common/autotest_common.sh@1217 -- # return 0 00:07:19.805 14:12:00 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:19.805 14:12:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.805 14:12:00 -- common/autotest_common.sh@10 -- # set +x 00:07:19.805 14:12:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.805 14:12:00 -- target/rpc.sh@81 -- # seq 1 5 00:07:19.805 14:12:00 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:19.805 14:12:00 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:19.805 14:12:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.805 14:12:00 -- common/autotest_common.sh@10 -- # set +x 00:07:19.805 14:12:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.805 14:12:00 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:19.805 14:12:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.805 14:12:00 -- common/autotest_common.sh@10 -- # set +x 00:07:19.805 [2024-04-26 14:12:00.927889] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:19.805 14:12:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.805 14:12:00 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:19.805 14:12:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.805 14:12:00 -- common/autotest_common.sh@10 -- # set +x 00:07:19.805 14:12:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.805 14:12:00 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:19.805 14:12:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:19.805 14:12:00 -- common/autotest_common.sh@10 -- # set +x 00:07:19.805 14:12:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:19.805 14:12:00 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:20.063 14:12:01 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:20.063 14:12:01 -- common/autotest_common.sh@1184 -- # local i=0 00:07:20.063 14:12:01 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:20.063 14:12:01 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:20.063 14:12:01 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:21.961 14:12:03 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:21.961 14:12:03 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:21.961 14:12:03 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:21.961 14:12:03 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:21.961 14:12:03 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:21.961 14:12:03 -- common/autotest_common.sh@1194 -- # return 0 00:07:21.961 14:12:03 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:21.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.961 14:12:03 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:21.961 14:12:03 -- common/autotest_common.sh@1205 -- # local i=0 00:07:21.961 14:12:03 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:21.961 14:12:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
00:07:21.961 14:12:03 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:21.961 14:12:03 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:21.961 14:12:03 -- common/autotest_common.sh@1217 -- # return 0 00:07:21.961 14:12:03 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.961 14:12:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:21.961 14:12:03 -- common/autotest_common.sh@10 -- # set +x 00:07:21.961 14:12:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:21.961 14:12:03 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:21.961 14:12:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:21.961 14:12:03 -- common/autotest_common.sh@10 -- # set +x 00:07:21.961 14:12:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:21.961 14:12:03 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:21.961 14:12:03 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:21.961 14:12:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:21.961 14:12:03 -- common/autotest_common.sh@10 -- # set +x 00:07:21.961 14:12:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:21.961 14:12:03 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:21.961 14:12:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:21.961 14:12:03 -- common/autotest_common.sh@10 -- # set +x 00:07:21.961 [2024-04-26 14:12:03.521509] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.961 14:12:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:21.961 14:12:03 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:21.961 14:12:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:21.961 14:12:03 -- common/autotest_common.sh@10 -- # set +x 00:07:22.219 14:12:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:22.219 14:12:03 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:22.219 14:12:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:22.219 14:12:03 -- common/autotest_common.sh@10 -- # set +x 00:07:22.219 14:12:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:22.219 14:12:03 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:22.477 14:12:04 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:22.477 14:12:04 -- common/autotest_common.sh@1184 -- # local i=0 00:07:22.477 14:12:04 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:22.477 14:12:04 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:22.477 14:12:04 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:25.003 14:12:06 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:25.003 14:12:06 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:25.003 14:12:06 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:25.003 14:12:06 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:25.003 14:12:06 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:25.003 14:12:06 -- 
common/autotest_common.sh@1194 -- # return 0 00:07:25.003 14:12:06 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:25.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:25.003 14:12:06 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:25.003 14:12:06 -- common/autotest_common.sh@1205 -- # local i=0 00:07:25.003 14:12:06 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:25.003 14:12:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:25.003 14:12:06 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:25.003 14:12:06 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:25.003 14:12:06 -- common/autotest_common.sh@1217 -- # return 0 00:07:25.003 14:12:06 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.003 14:12:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.003 14:12:06 -- common/autotest_common.sh@10 -- # set +x 00:07:25.003 14:12:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.003 14:12:06 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:25.003 14:12:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.003 14:12:06 -- common/autotest_common.sh@10 -- # set +x 00:07:25.003 14:12:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.003 14:12:06 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:25.003 14:12:06 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:25.003 14:12:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.003 14:12:06 -- common/autotest_common.sh@10 -- # set +x 00:07:25.003 14:12:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.003 14:12:06 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:25.003 14:12:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.003 14:12:06 -- common/autotest_common.sh@10 -- # set +x 00:07:25.003 [2024-04-26 14:12:06.119876] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:25.003 14:12:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.003 14:12:06 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:25.003 14:12:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.003 14:12:06 -- common/autotest_common.sh@10 -- # set +x 00:07:25.003 14:12:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.003 14:12:06 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:25.003 14:12:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.003 14:12:06 -- common/autotest_common.sh@10 -- # set +x 00:07:25.003 14:12:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.003 14:12:06 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:25.261 14:12:06 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:25.261 14:12:06 -- common/autotest_common.sh@1184 -- # local i=0 00:07:25.261 14:12:06 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:25.261 14:12:06 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:07:25.261 14:12:06 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:27.162 14:12:08 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:27.162 14:12:08 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:27.162 14:12:08 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:27.162 14:12:08 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:27.162 14:12:08 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:27.162 14:12:08 -- common/autotest_common.sh@1194 -- # return 0 00:07:27.162 14:12:08 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:27.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:27.162 14:12:08 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:27.162 14:12:08 -- common/autotest_common.sh@1205 -- # local i=0 00:07:27.162 14:12:08 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:27.162 14:12:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:27.162 14:12:08 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:27.162 14:12:08 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:27.162 14:12:08 -- common/autotest_common.sh@1217 -- # return 0 00:07:27.162 14:12:08 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:27.162 14:12:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.162 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:07:27.162 14:12:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.162 14:12:08 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:27.162 14:12:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.162 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:07:27.162 14:12:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.162 14:12:08 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:27.162 14:12:08 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:27.162 14:12:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.162 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:07:27.162 14:12:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.162 14:12:08 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.162 14:12:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.162 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:07:27.162 [2024-04-26 14:12:08.705183] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.162 14:12:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.162 14:12:08 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:27.162 14:12:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.162 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:07:27.162 14:12:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.162 14:12:08 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:27.162 14:12:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:27.162 14:12:08 -- common/autotest_common.sh@10 -- # set +x 00:07:27.162 14:12:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:27.162 
14:12:08 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:27.728 14:12:09 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:27.728 14:12:09 -- common/autotest_common.sh@1184 -- # local i=0 00:07:27.728 14:12:09 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:27.728 14:12:09 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:27.728 14:12:09 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:29.627 14:12:11 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:29.627 14:12:11 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:29.627 14:12:11 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:29.885 14:12:11 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:29.885 14:12:11 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:29.885 14:12:11 -- common/autotest_common.sh@1194 -- # return 0 00:07:29.885 14:12:11 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:29.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:29.885 14:12:11 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:29.885 14:12:11 -- common/autotest_common.sh@1205 -- # local i=0 00:07:29.885 14:12:11 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:29.885 14:12:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.885 14:12:11 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:29.885 14:12:11 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:29.885 14:12:11 -- common/autotest_common.sh@1217 -- # return 0 00:07:29.885 14:12:11 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:29.885 14:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.885 14:12:11 -- common/autotest_common.sh@10 -- # set +x 00:07:29.885 14:12:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.885 14:12:11 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.885 14:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.885 14:12:11 -- common/autotest_common.sh@10 -- # set +x 00:07:29.885 14:12:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.885 14:12:11 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:29.885 14:12:11 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:29.885 14:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.885 14:12:11 -- common/autotest_common.sh@10 -- # set +x 00:07:29.885 14:12:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.885 14:12:11 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.885 14:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.885 14:12:11 -- common/autotest_common.sh@10 -- # set +x 00:07:29.885 [2024-04-26 14:12:11.301017] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.885 14:12:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.885 14:12:11 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:29.885 
14:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.885 14:12:11 -- common/autotest_common.sh@10 -- # set +x 00:07:29.885 14:12:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.885 14:12:11 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:29.885 14:12:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:29.885 14:12:11 -- common/autotest_common.sh@10 -- # set +x 00:07:29.885 14:12:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:29.885 14:12:11 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:30.142 14:12:11 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:30.142 14:12:11 -- common/autotest_common.sh@1184 -- # local i=0 00:07:30.142 14:12:11 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:30.142 14:12:11 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:30.142 14:12:11 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:32.671 14:12:13 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:32.671 14:12:13 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:32.671 14:12:13 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:32.671 14:12:13 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:32.671 14:12:13 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:32.671 14:12:13 -- common/autotest_common.sh@1194 -- # return 0 00:07:32.671 14:12:13 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:32.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:32.671 14:12:13 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:32.671 14:12:13 -- common/autotest_common.sh@1205 -- # local i=0 00:07:32.671 14:12:13 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:32.671 14:12:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:32.671 14:12:13 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:32.671 14:12:13 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:32.671 14:12:13 -- common/autotest_common.sh@1217 -- # return 0 00:07:32.671 14:12:13 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:32.671 14:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:13 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 14:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:13 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:32.671 14:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:13 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 14:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:13 -- target/rpc.sh@99 -- # seq 1 5 00:07:32.671 14:12:13 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:32.671 14:12:13 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:32.671 14:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:13 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 14:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:13 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:32.671 14:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:13 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 [2024-04-26 14:12:13.895721] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.671 14:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:13 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:32.671 14:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:13 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 14:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:13 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:32.671 14:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:13 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 14:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:13 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.671 14:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:13 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 14:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:13 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:32.671 14:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:13 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 14:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:13 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:32.671 14:12:13 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:32.671 14:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:13 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 14:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:13 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:32.671 14:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:13 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 [2024-04-26 14:12:13.943814] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.671 14:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:13 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:32.671 14:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:13 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 14:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:13 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:32.671 14:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:13 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 14:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:13 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.671 14:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:13 -- 
common/autotest_common.sh@10 -- # set +x 00:07:32.671 14:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:13 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:32.671 14:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:13 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 14:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:13 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:32.671 14:12:13 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:32.671 14:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:13 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 14:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:13 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:32.671 14:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:13 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 [2024-04-26 14:12:13.991968] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.671 14:12:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:13 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:32.671 14:12:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:13 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 14:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:14 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:32.671 14:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:14 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 14:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:14 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.671 14:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:14 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 14:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:14 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:32.671 14:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:14 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 14:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:14 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:32.671 14:12:14 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:32.671 14:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:14 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 14:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:14 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:32.671 14:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:14 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 [2024-04-26 14:12:14.040133] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.671 14:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 
14:12:14 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:32.671 14:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:14 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 14:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:14 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:32.671 14:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:14 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 14:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:14 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.671 14:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:14 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 14:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:14 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:32.671 14:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:14 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 14:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:14 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:32.671 14:12:14 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:32.671 14:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:14 -- common/autotest_common.sh@10 -- # set +x 00:07:32.671 14:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.671 14:12:14 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:32.671 14:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.671 14:12:14 -- common/autotest_common.sh@10 -- # set +x 00:07:32.672 [2024-04-26 14:12:14.088294] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.672 14:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.672 14:12:14 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:32.672 14:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.672 14:12:14 -- common/autotest_common.sh@10 -- # set +x 00:07:32.672 14:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.672 14:12:14 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:32.672 14:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.672 14:12:14 -- common/autotest_common.sh@10 -- # set +x 00:07:32.672 14:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.672 14:12:14 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.672 14:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.672 14:12:14 -- common/autotest_common.sh@10 -- # set +x 00:07:32.672 14:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.672 14:12:14 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:32.672 14:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.672 14:12:14 -- common/autotest_common.sh@10 -- # set +x 00:07:32.672 14:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.672 14:12:14 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:07:32.672 14:12:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:32.672 14:12:14 -- common/autotest_common.sh@10 -- # set +x 00:07:32.672 14:12:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:32.672 14:12:14 -- target/rpc.sh@110 -- # stats='{ 00:07:32.672 "tick_rate": 2700000000, 00:07:32.672 "poll_groups": [ 00:07:32.672 { 00:07:32.672 "name": "nvmf_tgt_poll_group_0", 00:07:32.672 "admin_qpairs": 2, 00:07:32.672 "io_qpairs": 56, 00:07:32.672 "current_admin_qpairs": 0, 00:07:32.672 "current_io_qpairs": 0, 00:07:32.672 "pending_bdev_io": 0, 00:07:32.672 "completed_nvme_io": 156, 00:07:32.672 "transports": [ 00:07:32.672 { 00:07:32.672 "trtype": "TCP" 00:07:32.672 } 00:07:32.672 ] 00:07:32.672 }, 00:07:32.672 { 00:07:32.672 "name": "nvmf_tgt_poll_group_1", 00:07:32.672 "admin_qpairs": 2, 00:07:32.672 "io_qpairs": 56, 00:07:32.672 "current_admin_qpairs": 0, 00:07:32.672 "current_io_qpairs": 0, 00:07:32.672 "pending_bdev_io": 0, 00:07:32.672 "completed_nvme_io": 156, 00:07:32.672 "transports": [ 00:07:32.672 { 00:07:32.672 "trtype": "TCP" 00:07:32.672 } 00:07:32.672 ] 00:07:32.672 }, 00:07:32.672 { 00:07:32.672 "name": "nvmf_tgt_poll_group_2", 00:07:32.672 "admin_qpairs": 1, 00:07:32.672 "io_qpairs": 56, 00:07:32.672 "current_admin_qpairs": 0, 00:07:32.672 "current_io_qpairs": 0, 00:07:32.672 "pending_bdev_io": 0, 00:07:32.672 "completed_nvme_io": 105, 00:07:32.672 "transports": [ 00:07:32.672 { 00:07:32.672 "trtype": "TCP" 00:07:32.672 } 00:07:32.672 ] 00:07:32.672 }, 00:07:32.672 { 00:07:32.672 "name": "nvmf_tgt_poll_group_3", 00:07:32.672 "admin_qpairs": 2, 00:07:32.672 "io_qpairs": 56, 00:07:32.672 "current_admin_qpairs": 0, 00:07:32.672 "current_io_qpairs": 0, 00:07:32.672 "pending_bdev_io": 0, 00:07:32.672 "completed_nvme_io": 157, 00:07:32.672 "transports": [ 00:07:32.672 { 00:07:32.672 "trtype": "TCP" 00:07:32.672 } 00:07:32.672 ] 00:07:32.672 } 00:07:32.672 ] 00:07:32.672 }' 00:07:32.672 14:12:14 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:32.672 14:12:14 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:32.672 14:12:14 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:32.672 14:12:14 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:32.672 14:12:14 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:32.672 14:12:14 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:32.672 14:12:14 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:32.672 14:12:14 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:32.672 14:12:14 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:32.672 14:12:14 -- target/rpc.sh@113 -- # (( 224 > 0 )) 00:07:32.672 14:12:14 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:32.672 14:12:14 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:32.672 14:12:14 -- target/rpc.sh@123 -- # nvmftestfini 00:07:32.672 14:12:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:32.672 14:12:14 -- nvmf/common.sh@117 -- # sync 00:07:32.672 14:12:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:32.672 14:12:14 -- nvmf/common.sh@120 -- # set +e 00:07:32.672 14:12:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:32.672 14:12:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:32.672 rmmod nvme_tcp 00:07:32.930 rmmod nvme_fabrics 00:07:32.930 rmmod nvme_keyring 00:07:32.930 14:12:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:32.930 14:12:14 -- nvmf/common.sh@124 -- # set -e 00:07:32.930 14:12:14 -- 
nvmf/common.sh@125 -- # return 0 00:07:32.930 14:12:14 -- nvmf/common.sh@478 -- # '[' -n 3088228 ']' 00:07:32.930 14:12:14 -- nvmf/common.sh@479 -- # killprocess 3088228 00:07:32.930 14:12:14 -- common/autotest_common.sh@936 -- # '[' -z 3088228 ']' 00:07:32.930 14:12:14 -- common/autotest_common.sh@940 -- # kill -0 3088228 00:07:32.930 14:12:14 -- common/autotest_common.sh@941 -- # uname 00:07:32.930 14:12:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:32.930 14:12:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3088228 00:07:32.930 14:12:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:32.930 14:12:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:32.930 14:12:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3088228' 00:07:32.930 killing process with pid 3088228 00:07:32.930 14:12:14 -- common/autotest_common.sh@955 -- # kill 3088228 00:07:32.930 14:12:14 -- common/autotest_common.sh@960 -- # wait 3088228 00:07:33.190 14:12:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:33.190 14:12:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:33.190 14:12:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:33.191 14:12:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:33.191 14:12:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:33.191 14:12:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.191 14:12:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.191 14:12:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.098 14:12:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:35.098 00:07:35.098 real 0m23.297s 00:07:35.098 user 1m15.958s 00:07:35.098 sys 0m3.426s 00:07:35.098 14:12:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:35.098 14:12:16 -- common/autotest_common.sh@10 -- # set +x 00:07:35.098 ************************************ 00:07:35.098 END TEST nvmf_rpc 00:07:35.098 ************************************ 00:07:35.098 14:12:16 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:35.098 14:12:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:35.098 14:12:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.098 14:12:16 -- common/autotest_common.sh@10 -- # set +x 00:07:35.357 ************************************ 00:07:35.357 START TEST nvmf_invalid 00:07:35.357 ************************************ 00:07:35.357 14:12:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:35.357 * Looking for test storage... 
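The nvmf_rpc run that just finished drives one subsystem through its full lifecycle over JSON-RPC five times, then cross-checks nvmf_get_stats by summing each per-poll-group counter with jq and awk. A minimal standalone sketch of that pattern (the rpc.py path, NQN, listen address, and the Malloc1 bdev are the ones from this run; the loop count mirrors the seq 1 5 in the trace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  for i in $(seq 1 5); do
      $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME        # create with fixed serial
      $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
      $rpc nvmf_subsystem_add_ns "$nqn" Malloc1                        # attach the malloc bdev as a namespace
      $rpc nvmf_subsystem_allow_any_host "$nqn"
      $rpc nvmf_subsystem_remove_ns "$nqn" 1                           # detach nsid 1 again
      $rpc nvmf_delete_subsystem "$nqn"
  done
  # jsum equivalent: sum one numeric field across all poll groups
  $rpc nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'

With the four poll groups shown in the stats dump above this prints 224, matching the (( 224 > 0 )) check in the trace.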
00:07:35.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.357 14:12:16 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.357 14:12:16 -- nvmf/common.sh@7 -- # uname -s 00:07:35.357 14:12:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.357 14:12:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.357 14:12:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.357 14:12:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.357 14:12:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.357 14:12:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.357 14:12:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.357 14:12:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.357 14:12:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.357 14:12:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.357 14:12:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:35.357 14:12:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:35.357 14:12:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.357 14:12:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.357 14:12:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.357 14:12:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.357 14:12:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.357 14:12:16 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.357 14:12:16 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.357 14:12:16 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.357 14:12:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.357 14:12:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.357 14:12:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.357 14:12:16 -- paths/export.sh@5 -- # export PATH 00:07:35.357 14:12:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.357 14:12:16 -- nvmf/common.sh@47 -- # : 0 00:07:35.357 14:12:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:35.357 14:12:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:35.357 14:12:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.357 14:12:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.357 14:12:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.357 14:12:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:35.357 14:12:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:35.357 14:12:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:35.357 14:12:16 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:35.357 14:12:16 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:35.357 14:12:16 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:35.357 14:12:16 -- target/invalid.sh@14 -- # target=foobar 00:07:35.357 14:12:16 -- target/invalid.sh@16 -- # RANDOM=0 00:07:35.357 14:12:16 -- target/invalid.sh@34 -- # nvmftestinit 00:07:35.357 14:12:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:35.357 14:12:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.357 14:12:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:35.357 14:12:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:35.357 14:12:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:35.357 14:12:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.357 14:12:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:35.357 14:12:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.357 14:12:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:35.357 14:12:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:35.357 14:12:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:35.357 14:12:16 -- common/autotest_common.sh@10 -- # set +x 00:07:37.268 14:12:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:37.268 14:12:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:37.268 14:12:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:37.268 14:12:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:37.268 14:12:18 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:37.268 14:12:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:37.268 14:12:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:37.268 14:12:18 -- nvmf/common.sh@295 -- # net_devs=() 00:07:37.268 14:12:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:37.268 14:12:18 -- nvmf/common.sh@296 -- # e810=() 00:07:37.268 14:12:18 -- nvmf/common.sh@296 -- # local -ga e810 00:07:37.268 14:12:18 -- nvmf/common.sh@297 -- # x722=() 00:07:37.268 14:12:18 -- nvmf/common.sh@297 -- # local -ga x722 00:07:37.268 14:12:18 -- nvmf/common.sh@298 -- # mlx=() 00:07:37.268 14:12:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:37.268 14:12:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.268 14:12:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.268 14:12:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.268 14:12:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.268 14:12:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.268 14:12:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.268 14:12:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.268 14:12:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.268 14:12:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.268 14:12:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.268 14:12:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.268 14:12:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:37.268 14:12:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:37.268 14:12:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:37.268 14:12:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:37.268 14:12:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:37.268 14:12:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:37.268 14:12:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.268 14:12:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:37.268 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:37.268 14:12:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.268 14:12:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.268 14:12:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.268 14:12:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.268 14:12:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.268 14:12:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.268 14:12:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:37.268 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:37.268 14:12:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:37.268 14:12:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:37.268 14:12:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.268 14:12:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.268 14:12:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:37.268 14:12:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:37.268 14:12:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:37.268 14:12:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:37.268 14:12:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.268 
14:12:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.268 14:12:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:37.268 14:12:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.268 14:12:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:37.268 Found net devices under 0000:08:00.0: cvl_0_0 00:07:37.268 14:12:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.268 14:12:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.268 14:12:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.268 14:12:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:37.268 14:12:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.268 14:12:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:37.268 Found net devices under 0000:08:00.1: cvl_0_1 00:07:37.269 14:12:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.269 14:12:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:37.269 14:12:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:37.269 14:12:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:37.269 14:12:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:37.269 14:12:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:37.269 14:12:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.269 14:12:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.269 14:12:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.269 14:12:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:37.269 14:12:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.269 14:12:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.269 14:12:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:37.269 14:12:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.269 14:12:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.269 14:12:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:37.269 14:12:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:37.269 14:12:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.269 14:12:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:37.269 14:12:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:37.269 14:12:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:37.269 14:12:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:37.269 14:12:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.269 14:12:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:37.269 14:12:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:37.269 14:12:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:37.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:37.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:07:37.269 00:07:37.269 --- 10.0.0.2 ping statistics --- 00:07:37.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.269 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:07:37.269 14:12:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:37.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:07:37.269 00:07:37.269 --- 10.0.0.1 ping statistics --- 00:07:37.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.269 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:07:37.269 14:12:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.269 14:12:18 -- nvmf/common.sh@411 -- # return 0 00:07:37.269 14:12:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:37.269 14:12:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.269 14:12:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:37.269 14:12:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:37.269 14:12:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.269 14:12:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:37.269 14:12:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:37.269 14:12:18 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:37.269 14:12:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:37.269 14:12:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:37.269 14:12:18 -- common/autotest_common.sh@10 -- # set +x 00:07:37.269 14:12:18 -- nvmf/common.sh@470 -- # nvmfpid=3092242 00:07:37.269 14:12:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:37.269 14:12:18 -- nvmf/common.sh@471 -- # waitforlisten 3092242 00:07:37.269 14:12:18 -- common/autotest_common.sh@817 -- # '[' -z 3092242 ']' 00:07:37.269 14:12:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.269 14:12:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:37.269 14:12:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.269 14:12:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:37.269 14:12:18 -- common/autotest_common.sh@10 -- # set +x 00:07:37.269 [2024-04-26 14:12:18.613419] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:07:37.269 [2024-04-26 14:12:18.613501] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.269 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.269 [2024-04-26 14:12:18.677137] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.269 [2024-04-26 14:12:18.793507] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.269 [2024-04-26 14:12:18.793563] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.269 [2024-04-26 14:12:18.793579] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.269 [2024-04-26 14:12:18.793593] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.269 [2024-04-26 14:12:18.793605] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
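The waitforlisten helper traced above, with max_retries=100 and rpc_addr=/var/tmp/spdk.sock, blocks until the freshly launched nvmf_tgt (pid 3092242 here) both stays alive and answers RPC. A rough sketch of the idea, assuming rpc.py's rpc_get_methods call as the liveness probe (the real helper in autotest_common.sh is more elaborate; the retry interval here is illustrative):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i=0
      while (( i++ < 100 )); do                        # max_retries=100, as in the trace
          kill -0 "$pid" 2>/dev/null || return 1       # target died while starting up
          [[ -S $rpc_addr ]] &&
              "$rpc" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
          sleep 0.5                                    # assumed poll interval
      done
      return 1                                         # never came up within the retry budget
  }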
00:07:37.269 [2024-04-26 14:12:18.793688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.269 [2024-04-26 14:12:18.793778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.269 [2024-04-26 14:12:18.793874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.269 [2024-04-26 14:12:18.793879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.527 14:12:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:37.527 14:12:18 -- common/autotest_common.sh@850 -- # return 0 00:07:37.527 14:12:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:37.527 14:12:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:37.527 14:12:18 -- common/autotest_common.sh@10 -- # set +x 00:07:37.527 14:12:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.527 14:12:18 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:37.527 14:12:18 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode19940 00:07:37.785 [2024-04-26 14:12:19.214035] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:37.785 14:12:19 -- target/invalid.sh@40 -- # out='request: 00:07:37.785 { 00:07:37.785 "nqn": "nqn.2016-06.io.spdk:cnode19940", 00:07:37.785 "tgt_name": "foobar", 00:07:37.785 "method": "nvmf_create_subsystem", 00:07:37.785 "req_id": 1 00:07:37.785 } 00:07:37.785 Got JSON-RPC error response 00:07:37.785 response: 00:07:37.785 { 00:07:37.785 "code": -32603, 00:07:37.785 "message": "Unable to find target foobar" 00:07:37.785 }' 00:07:37.785 14:12:19 -- target/invalid.sh@41 -- # [[ request: 00:07:37.785 { 00:07:37.785 "nqn": "nqn.2016-06.io.spdk:cnode19940", 00:07:37.785 "tgt_name": "foobar", 00:07:37.785 "method": "nvmf_create_subsystem", 00:07:37.785 "req_id": 1 00:07:37.785 } 00:07:37.785 Got JSON-RPC error response 00:07:37.785 response: 00:07:37.785 { 00:07:37.785 "code": -32603, 00:07:37.785 "message": "Unable to find target foobar" 00:07:37.785 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:37.785 14:12:19 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:37.785 14:12:19 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode25597 00:07:38.043 [2024-04-26 14:12:19.511049] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25597: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:38.043 14:12:19 -- target/invalid.sh@45 -- # out='request: 00:07:38.043 { 00:07:38.043 "nqn": "nqn.2016-06.io.spdk:cnode25597", 00:07:38.043 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:38.043 "method": "nvmf_create_subsystem", 00:07:38.043 "req_id": 1 00:07:38.043 } 00:07:38.043 Got JSON-RPC error response 00:07:38.043 response: 00:07:38.043 { 00:07:38.043 "code": -32602, 00:07:38.043 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:38.043 }' 00:07:38.043 14:12:19 -- target/invalid.sh@46 -- # [[ request: 00:07:38.043 { 00:07:38.043 "nqn": "nqn.2016-06.io.spdk:cnode25597", 00:07:38.043 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:38.043 "method": "nvmf_create_subsystem", 00:07:38.043 "req_id": 1 00:07:38.043 } 00:07:38.043 Got JSON-RPC error response 00:07:38.043 response: 00:07:38.043 { 
00:07:38.043 "code": -32602, 00:07:38.043 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:38.043 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:38.043 14:12:19 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:38.043 14:12:19 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23971 00:07:38.301 [2024-04-26 14:12:19.808073] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23971: invalid model number 'SPDK_Controller' 00:07:38.301 14:12:19 -- target/invalid.sh@50 -- # out='request: 00:07:38.301 { 00:07:38.301 "nqn": "nqn.2016-06.io.spdk:cnode23971", 00:07:38.301 "model_number": "SPDK_Controller\u001f", 00:07:38.301 "method": "nvmf_create_subsystem", 00:07:38.301 "req_id": 1 00:07:38.301 } 00:07:38.301 Got JSON-RPC error response 00:07:38.301 response: 00:07:38.301 { 00:07:38.301 "code": -32602, 00:07:38.301 "message": "Invalid MN SPDK_Controller\u001f" 00:07:38.301 }' 00:07:38.301 14:12:19 -- target/invalid.sh@51 -- # [[ request: 00:07:38.301 { 00:07:38.301 "nqn": "nqn.2016-06.io.spdk:cnode23971", 00:07:38.301 "model_number": "SPDK_Controller\u001f", 00:07:38.301 "method": "nvmf_create_subsystem", 00:07:38.301 "req_id": 1 00:07:38.301 } 00:07:38.301 Got JSON-RPC error response 00:07:38.301 response: 00:07:38.301 { 00:07:38.301 "code": -32602, 00:07:38.301 "message": "Invalid MN SPDK_Controller\u001f" 00:07:38.301 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:38.301 14:12:19 -- target/invalid.sh@54 -- # gen_random_s 21 00:07:38.301 14:12:19 -- target/invalid.sh@19 -- # local length=21 ll 00:07:38.301 14:12:19 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:38.301 14:12:19 -- target/invalid.sh@21 -- # local chars 00:07:38.301 14:12:19 -- target/invalid.sh@22 -- # local string 00:07:38.301 14:12:19 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:38.301 14:12:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # printf %x 71 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # echo -e '\x47' 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # string+=G 00:07:38.301 14:12:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.301 14:12:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # printf %x 56 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # echo -e '\x38' 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # string+=8 00:07:38.301 14:12:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.301 14:12:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # printf %x 80 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # echo -e '\x50' 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # string+=P 00:07:38.301 14:12:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.301 14:12:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # printf %x 76 00:07:38.301 14:12:19 -- 
target/invalid.sh@25 -- # echo -e '\x4c' 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # string+=L 00:07:38.301 14:12:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.301 14:12:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # printf %x 112 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # string+=p 00:07:38.301 14:12:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.301 14:12:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # printf %x 39 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # echo -e '\x27' 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # string+=\' 00:07:38.301 14:12:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.301 14:12:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # printf %x 51 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # string+=3 00:07:38.301 14:12:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.301 14:12:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # printf %x 56 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # echo -e '\x38' 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # string+=8 00:07:38.301 14:12:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.301 14:12:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # printf %x 103 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # echo -e '\x67' 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # string+=g 00:07:38.301 14:12:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.301 14:12:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # printf %x 111 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:07:38.301 14:12:19 -- target/invalid.sh@25 -- # string+=o 00:07:38.301 14:12:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.301 14:12:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # printf %x 54 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # echo -e '\x36' 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # string+=6 00:07:38.560 14:12:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.560 14:12:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # printf %x 54 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # echo -e '\x36' 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # string+=6 00:07:38.560 14:12:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.560 14:12:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # printf %x 78 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # string+=N 00:07:38.560 14:12:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.560 14:12:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # printf %x 41 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # echo -e '\x29' 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # string+=')' 00:07:38.560 14:12:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.560 14:12:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # printf %x 117 00:07:38.560 14:12:19 -- 
target/invalid.sh@25 -- # echo -e '\x75' 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # string+=u 00:07:38.560 14:12:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.560 14:12:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # printf %x 106 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # string+=j 00:07:38.560 14:12:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.560 14:12:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # printf %x 60 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # string+='<' 00:07:38.560 14:12:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.560 14:12:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # printf %x 120 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # echo -e '\x78' 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # string+=x 00:07:38.560 14:12:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.560 14:12:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # printf %x 120 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # echo -e '\x78' 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # string+=x 00:07:38.560 14:12:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.560 14:12:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # printf %x 57 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # echo -e '\x39' 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # string+=9 00:07:38.560 14:12:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.560 14:12:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # printf %x 92 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:07:38.560 14:12:19 -- target/invalid.sh@25 -- # string+='\' 00:07:38.560 14:12:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:07:38.560 14:12:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:07:38.560 14:12:19 -- target/invalid.sh@28 -- # [[ G == \- ]] 00:07:38.560 14:12:19 -- target/invalid.sh@31 -- # echo 'G8PLp'\''38go66N)uj<xx9\' 00:07:42.018 14:12:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:42.018 14:12:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.929 14:12:25 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:43.929 00:07:43.929 real 0m8.738s 00:07:43.929 user 0m22.397s 00:07:43.929 sys 0m2.130s 00:07:43.929 14:12:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:43.929 14:12:25 -- common/autotest_common.sh@10 -- # set +x 00:07:43.929 ************************************ 00:07:43.929 END TEST nvmf_invalid 00:07:43.929 ************************************ 00:07:43.929 14:12:25 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:43.929 14:12:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:43.929 14:12:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.929 14:12:25 -- common/autotest_common.sh@10 -- # set +x 00:07:44.188 ************************************ 00:07:44.188 START TEST nvmf_abort 00:07:44.188 ************************************ 00:07:44.188 14:12:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:44.188 * Looking for test storage...
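The gen_random_s walk traced in the nvmf_invalid run above assembles the 21-character serial one character at a time: pick an ASCII code from the chars table, convert it to hex with printf %x, materialize the character with echo -e '\xNN', and append it. Condensed into a standalone sketch (printf -v replaces the trace's echo -e so a generated space survives command substitution, and printf '%s\n' sidesteps the leading-'-' problem that the [[ G == \- ]] test guards against):

  gen_random_s_sketch() {
      local length=$1 ll ch string=
      local chars=($(seq 32 127))        # same code-point range as the chars table in the trace
      for (( ll = 0; ll < length; ll++ )); do
          printf -v ch '%b' "\\x$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")"
          string+=$ch
      done
      printf '%s\n' "$string"            # echo could misparse a string that starts with '-'
  }

Seeding RANDOM=0 beforehand, as invalid.sh does, makes the generated sequence reproducible across runs.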
00:07:44.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:44.188 14:12:25 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:44.188 14:12:25 -- nvmf/common.sh@7 -- # uname -s 00:07:44.188 14:12:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.188 14:12:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.188 14:12:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:44.188 14:12:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.188 14:12:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.188 14:12:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.188 14:12:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.188 14:12:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.188 14:12:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.188 14:12:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.188 14:12:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:44.188 14:12:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:44.188 14:12:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.188 14:12:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.188 14:12:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:44.188 14:12:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.188 14:12:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:44.188 14:12:25 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.188 14:12:25 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.188 14:12:25 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.188 14:12:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.188 14:12:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.188 14:12:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.188 14:12:25 -- paths/export.sh@5 -- # export PATH 00:07:44.188 14:12:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.188 14:12:25 -- nvmf/common.sh@47 -- # : 0 00:07:44.188 14:12:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:44.188 14:12:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:44.188 14:12:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.188 14:12:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:44.188 14:12:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.188 14:12:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:44.188 14:12:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:44.189 14:12:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:44.189 14:12:25 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:44.189 14:12:25 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:44.189 14:12:25 -- target/abort.sh@14 -- # nvmftestinit 00:07:44.189 14:12:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:44.189 14:12:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:44.189 14:12:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:44.189 14:12:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:44.189 14:12:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:44.189 14:12:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.189 14:12:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:44.189 14:12:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.189 14:12:25 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:44.189 14:12:25 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:44.189 14:12:25 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:44.189 14:12:25 -- common/autotest_common.sh@10 -- # set +x 00:07:46.099 14:12:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:46.099 14:12:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:46.099 14:12:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:46.099 14:12:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:46.099 14:12:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:46.099 14:12:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:46.099 14:12:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:46.099 14:12:27 -- nvmf/common.sh@295 -- # net_devs=() 00:07:46.099 14:12:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:46.099 14:12:27 -- nvmf/common.sh@296 -- 
# e810=() 00:07:46.099 14:12:27 -- nvmf/common.sh@296 -- # local -ga e810 00:07:46.099 14:12:27 -- nvmf/common.sh@297 -- # x722=() 00:07:46.099 14:12:27 -- nvmf/common.sh@297 -- # local -ga x722 00:07:46.099 14:12:27 -- nvmf/common.sh@298 -- # mlx=() 00:07:46.099 14:12:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:46.099 14:12:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:46.099 14:12:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:46.099 14:12:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:46.099 14:12:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:46.099 14:12:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:46.099 14:12:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:46.099 14:12:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:46.099 14:12:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:46.099 14:12:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:46.099 14:12:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:46.099 14:12:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:46.099 14:12:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:46.099 14:12:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:46.099 14:12:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:46.099 14:12:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:46.099 14:12:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:46.099 14:12:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:46.099 14:12:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:46.099 14:12:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:46.099 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:46.099 14:12:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:46.099 14:12:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:46.099 14:12:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.099 14:12:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.099 14:12:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:46.099 14:12:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:46.099 14:12:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:46.099 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:46.099 14:12:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:46.099 14:12:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:46.099 14:12:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.099 14:12:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.099 14:12:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:46.099 14:12:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:46.099 14:12:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:46.099 14:12:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:46.099 14:12:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:46.099 14:12:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.100 14:12:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:46.100 14:12:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.100 14:12:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:46.100 Found 
net devices under 0000:08:00.0: cvl_0_0 00:07:46.100 14:12:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.100 14:12:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:46.100 14:12:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.100 14:12:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:46.100 14:12:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.100 14:12:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:46.100 Found net devices under 0000:08:00.1: cvl_0_1 00:07:46.100 14:12:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.100 14:12:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:46.100 14:12:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:46.100 14:12:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:46.100 14:12:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:46.100 14:12:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:46.100 14:12:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:46.100 14:12:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:46.100 14:12:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:46.100 14:12:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:46.100 14:12:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:46.100 14:12:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:46.100 14:12:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:46.100 14:12:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:46.100 14:12:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:46.100 14:12:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:46.100 14:12:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:46.100 14:12:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:46.100 14:12:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:46.100 14:12:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:46.100 14:12:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:46.100 14:12:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:46.100 14:12:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:46.100 14:12:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:46.100 14:12:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:46.100 14:12:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:46.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:46.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:07:46.100 00:07:46.100 --- 10.0.0.2 ping statistics --- 00:07:46.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.100 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:07:46.100 14:12:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:46.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:46.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:07:46.100 00:07:46.100 --- 10.0.0.1 ping statistics --- 00:07:46.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.100 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:07:46.100 14:12:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:46.100 14:12:27 -- nvmf/common.sh@411 -- # return 0 00:07:46.100 14:12:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:46.100 14:12:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:46.100 14:12:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:46.100 14:12:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:46.100 14:12:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:46.100 14:12:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:46.100 14:12:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:46.100 14:12:27 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:46.100 14:12:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:46.100 14:12:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:46.100 14:12:27 -- common/autotest_common.sh@10 -- # set +x 00:07:46.100 14:12:27 -- nvmf/common.sh@470 -- # nvmfpid=3094336 00:07:46.100 14:12:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:46.100 14:12:27 -- nvmf/common.sh@471 -- # waitforlisten 3094336 00:07:46.100 14:12:27 -- common/autotest_common.sh@817 -- # '[' -z 3094336 ']' 00:07:46.100 14:12:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.100 14:12:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:46.100 14:12:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.100 14:12:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:46.100 14:12:27 -- common/autotest_common.sh@10 -- # set +x 00:07:46.100 [2024-04-26 14:12:27.407243] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:07:46.100 [2024-04-26 14:12:27.407330] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.100 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.100 [2024-04-26 14:12:27.472224] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:46.100 [2024-04-26 14:12:27.587002] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:46.100 [2024-04-26 14:12:27.587067] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:46.100 [2024-04-26 14:12:27.587083] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:46.100 [2024-04-26 14:12:27.587097] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:46.100 [2024-04-26 14:12:27.587110] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
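The pattern traced above is the per-test target bring-up: nvmf_tgt is launched inside the namespace that owns the target-side port, and configuration only proceeds once the process answers on its UNIX-domain RPC socket. A minimal standalone sketch of the same sequence, run from the SPDK tree and assuming the namespace from this job (the polling loop is illustrative and stands in for the harness's own waitforlisten helper; rpc_get_methods is simply a cheap RPC used here to probe readiness):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Poll the RPC socket until the target accepts commands.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target died during startup
      sleep 0.5
  done

Here -i 0 pins the shared-memory ID, -e 0xFFFF sets the tracepoint group mask reported in the notices above, and the -m 0xE core mask accounts for the three reactors that come up next on cores 1-3. The RPC socket lives on the shared filesystem, so rpc.py can be driven from the default namespace even though the target runs inside cvl_0_0_ns_spdk.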
00:07:46.100 [2024-04-26 14:12:27.587194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.100 [2024-04-26 14:12:27.587255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.100 [2024-04-26 14:12:27.587259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.359 14:12:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:46.359 14:12:27 -- common/autotest_common.sh@850 -- # return 0 00:07:46.359 14:12:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:46.359 14:12:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:46.359 14:12:27 -- common/autotest_common.sh@10 -- # set +x 00:07:46.359 14:12:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:46.359 14:12:27 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:46.359 14:12:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:46.359 14:12:27 -- common/autotest_common.sh@10 -- # set +x 00:07:46.359 [2024-04-26 14:12:27.723982] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.359 14:12:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:46.359 14:12:27 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:46.359 14:12:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:46.359 14:12:27 -- common/autotest_common.sh@10 -- # set +x 00:07:46.359 Malloc0 00:07:46.359 14:12:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:46.359 14:12:27 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:46.359 14:12:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:46.359 14:12:27 -- common/autotest_common.sh@10 -- # set +x 00:07:46.359 Delay0 00:07:46.359 14:12:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:46.359 14:12:27 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:46.359 14:12:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:46.359 14:12:27 -- common/autotest_common.sh@10 -- # set +x 00:07:46.359 14:12:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:46.359 14:12:27 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:46.359 14:12:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:46.359 14:12:27 -- common/autotest_common.sh@10 -- # set +x 00:07:46.359 14:12:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:46.359 14:12:27 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:46.359 14:12:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:46.359 14:12:27 -- common/autotest_common.sh@10 -- # set +x 00:07:46.359 [2024-04-26 14:12:27.796012] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:46.359 14:12:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:46.359 14:12:27 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:46.359 14:12:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:46.359 14:12:27 -- common/autotest_common.sh@10 -- # set +x 00:07:46.359 14:12:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:46.359 14:12:27 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:46.359 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.618 [2024-04-26 14:12:27.942758] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:48.519 Initializing NVMe Controllers 00:07:48.519 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:48.519 Controller IO queue size 128, less than required. 00:07:48.519 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:48.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:48.519 Initialization complete. Launching workers. 00:07:48.519 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 26631 00:07:48.519 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 26692, failed to submit 62 00:07:48.519 success 26635, unsuccessful 57, failed 0 00:07:48.519 14:12:30 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:48.519 14:12:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:48.519 14:12:30 -- common/autotest_common.sh@10 -- # set +x 00:07:48.519 14:12:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:48.519 14:12:30 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:48.519 14:12:30 -- target/abort.sh@38 -- # nvmftestfini 00:07:48.519 14:12:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:48.519 14:12:30 -- nvmf/common.sh@117 -- # sync 00:07:48.519 14:12:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:48.519 14:12:30 -- nvmf/common.sh@120 -- # set +e 00:07:48.519 14:12:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:48.519 14:12:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:48.519 rmmod nvme_tcp 00:07:48.519 rmmod nvme_fabrics 00:07:48.519 rmmod nvme_keyring 00:07:48.519 14:12:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:48.519 14:12:30 -- nvmf/common.sh@124 -- # set -e 00:07:48.519 14:12:30 -- nvmf/common.sh@125 -- # return 0 00:07:48.519 14:12:30 -- nvmf/common.sh@478 -- # '[' -n 3094336 ']' 00:07:48.519 14:12:30 -- nvmf/common.sh@479 -- # killprocess 3094336 00:07:48.519 14:12:30 -- common/autotest_common.sh@936 -- # '[' -z 3094336 ']' 00:07:48.519 14:12:30 -- common/autotest_common.sh@940 -- # kill -0 3094336 00:07:48.519 14:12:30 -- common/autotest_common.sh@941 -- # uname 00:07:48.519 14:12:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:48.781 14:12:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3094336 00:07:48.781 14:12:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:07:48.781 14:12:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:07:48.781 14:12:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3094336' 00:07:48.781 killing process with pid 3094336 00:07:48.781 14:12:30 -- common/autotest_common.sh@955 -- # kill 3094336 00:07:48.781 14:12:30 -- common/autotest_common.sh@960 -- # wait 3094336 00:07:48.781 14:12:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:48.781 14:12:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:48.781 14:12:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:48.781 14:12:30 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:48.781 14:12:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:48.781
14:12:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.781 14:12:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.781 14:12:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.317 14:12:32 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:51.317 00:07:51.317 real 0m6.797s 00:07:51.317 user 0m10.457s 00:07:51.317 sys 0m2.094s 00:07:51.317 14:12:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:51.317 14:12:32 -- common/autotest_common.sh@10 -- # set +x 00:07:51.317 ************************************ 00:07:51.317 END TEST nvmf_abort 00:07:51.317 ************************************ 00:07:51.317 14:12:32 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:51.317 14:12:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:51.317 14:12:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.317 14:12:32 -- common/autotest_common.sh@10 -- # set +x 00:07:51.317 ************************************ 00:07:51.317 START TEST nvmf_ns_hotplug_stress 00:07:51.317 ************************************ 00:07:51.317 14:12:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:51.317 * Looking for test storage... 00:07:51.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.317 14:12:32 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.317 14:12:32 -- nvmf/common.sh@7 -- # uname -s 00:07:51.317 14:12:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.317 14:12:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.317 14:12:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.317 14:12:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.317 14:12:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.317 14:12:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.317 14:12:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.317 14:12:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.317 14:12:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.317 14:12:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.317 14:12:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:51.317 14:12:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:51.317 14:12:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.317 14:12:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.317 14:12:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.317 14:12:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.317 14:12:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.317 14:12:32 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.317 14:12:32 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.317 14:12:32 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.317 14:12:32 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.317 14:12:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.317 14:12:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.317 14:12:32 -- paths/export.sh@5 -- # export PATH 00:07:51.317 14:12:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.317 14:12:32 -- nvmf/common.sh@47 -- # : 0 00:07:51.317 14:12:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:51.317 14:12:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:51.317 14:12:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.317 14:12:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.317 14:12:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.317 14:12:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:51.317 14:12:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:51.317 14:12:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:51.317 14:12:32 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.317 14:12:32 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:07:51.317 14:12:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:51.317 14:12:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.317 14:12:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:51.317 14:12:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:51.317 14:12:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:51.317 14:12:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:07:51.317 14:12:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.317 14:12:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.317 14:12:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:51.317 14:12:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:51.317 14:12:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:51.317 14:12:32 -- common/autotest_common.sh@10 -- # set +x 00:07:52.695 14:12:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:52.695 14:12:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:52.695 14:12:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:52.695 14:12:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:52.695 14:12:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:52.695 14:12:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:52.695 14:12:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:52.695 14:12:34 -- nvmf/common.sh@295 -- # net_devs=() 00:07:52.695 14:12:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:52.695 14:12:34 -- nvmf/common.sh@296 -- # e810=() 00:07:52.695 14:12:34 -- nvmf/common.sh@296 -- # local -ga e810 00:07:52.695 14:12:34 -- nvmf/common.sh@297 -- # x722=() 00:07:52.695 14:12:34 -- nvmf/common.sh@297 -- # local -ga x722 00:07:52.695 14:12:34 -- nvmf/common.sh@298 -- # mlx=() 00:07:52.695 14:12:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:52.695 14:12:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.695 14:12:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.695 14:12:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.695 14:12:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.695 14:12:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.695 14:12:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.695 14:12:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.695 14:12:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.695 14:12:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.695 14:12:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.695 14:12:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.695 14:12:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:52.695 14:12:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:52.695 14:12:34 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:52.695 14:12:34 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:52.695 14:12:34 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:52.695 14:12:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:52.695 14:12:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.695 14:12:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:52.695 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:52.695 14:12:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:52.695 14:12:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:52.695 14:12:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.695 14:12:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.695 14:12:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:52.695 14:12:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.695 14:12:34 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:52.695 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:52.695 14:12:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:52.695 14:12:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:52.695 14:12:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.695 14:12:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.695 14:12:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:52.695 14:12:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:52.695 14:12:34 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:52.695 14:12:34 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:52.696 14:12:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:52.696 14:12:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.696 14:12:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:52.696 14:12:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.696 14:12:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:52.696 Found net devices under 0000:08:00.0: cvl_0_0 00:07:52.696 14:12:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.696 14:12:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:52.696 14:12:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.696 14:12:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:52.696 14:12:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.696 14:12:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:52.696 Found net devices under 0000:08:00.1: cvl_0_1 00:07:52.696 14:12:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.696 14:12:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:52.696 14:12:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:52.696 14:12:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:52.696 14:12:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:52.696 14:12:34 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:52.696 14:12:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.696 14:12:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:52.696 14:12:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:52.696 14:12:34 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:52.696 14:12:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:52.696 14:12:34 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:52.696 14:12:34 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:52.696 14:12:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:52.696 14:12:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.696 14:12:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:52.696 14:12:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:52.696 14:12:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:52.696 14:12:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:52.953 14:12:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:52.953 14:12:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:52.953 14:12:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:52.953 14:12:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
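As in the first test, the PCI scan resolves each supported function to its kernel netdev through the same sysfs glob the script itself uses, /sys/bus/pci/devices/$pci/net/: both E810 ports (device ID 0x159b, bound to the ice driver) come back as cvl_0_0 and cvl_0_1. The mapping can be confirmed by hand with stock tools (lspci is not part of the harness and is shown only for illustration; addresses are the ones from this run):

  ls /sys/bus/pci/devices/0000:08:00.0/net                          # -> cvl_0_0
  ls /sys/bus/pci/devices/0000:08:00.1/net                          # -> cvl_0_1
  basename "$(readlink /sys/bus/pci/devices/0000:08:00.0/driver)"   # -> ice
  lspci -nn -s 08:00.0                                              # vendor:device should read [8086:159b]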
00:07:52.953 14:12:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:52.953 14:12:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:52.953 14:12:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:52.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:07:52.953 00:07:52.953 --- 10.0.0.2 ping statistics --- 00:07:52.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.953 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:07:52.953 14:12:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:52.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:52.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:07:52.953 00:07:52.953 --- 10.0.0.1 ping statistics --- 00:07:52.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.953 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:07:52.953 14:12:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.953 14:12:34 -- nvmf/common.sh@411 -- # return 0 00:07:52.953 14:12:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:52.953 14:12:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.953 14:12:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:52.953 14:12:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:52.953 14:12:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.953 14:12:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:52.953 14:12:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:52.953 14:12:34 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:07:52.953 14:12:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:52.953 14:12:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:52.953 14:12:34 -- common/autotest_common.sh@10 -- # set +x 00:07:52.953 14:12:34 -- nvmf/common.sh@470 -- # nvmfpid=3096063 00:07:52.953 14:12:34 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:52.953 14:12:34 -- nvmf/common.sh@471 -- # waitforlisten 3096063 00:07:52.953 14:12:34 -- common/autotest_common.sh@817 -- # '[' -z 3096063 ']' 00:07:52.953 14:12:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.953 14:12:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:52.953 14:12:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.953 14:12:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:52.953 14:12:34 -- common/autotest_common.sh@10 -- # set +x 00:07:52.953 [2024-04-26 14:12:34.422778] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
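With both E810 ports in the same host, the namespace split is what forces initiator traffic onto the physical link: if both 10.0.0.x addresses lived in the default namespace, the kernel would short-circuit the traffic locally. The target port and the target process therefore live in cvl_0_0_ns_spdk while the initiator keeps the default namespace. The whole topology, condensed from the trace above (interface names and addressing are specific to this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port leaves the default namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two one-packet pings that follow prove reachability in each direction before any NVMe/TCP traffic is attempted.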
00:07:52.953 [2024-04-26 14:12:34.422864] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.953 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.953 [2024-04-26 14:12:34.486515] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.210 [2024-04-26 14:12:34.601517] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.210 [2024-04-26 14:12:34.601581] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.210 [2024-04-26 14:12:34.601597] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.210 [2024-04-26 14:12:34.601611] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.210 [2024-04-26 14:12:34.601623] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.210 [2024-04-26 14:12:34.601725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.210 [2024-04-26 14:12:34.601811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.210 [2024-04-26 14:12:34.601815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.210 14:12:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:53.210 14:12:34 -- common/autotest_common.sh@850 -- # return 0 00:07:53.210 14:12:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:53.210 14:12:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:53.210 14:12:34 -- common/autotest_common.sh@10 -- # set +x 00:07:53.210 14:12:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.210 14:12:34 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:07:53.210 14:12:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:53.467 [2024-04-26 14:12:35.001662] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.467 14:12:35 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:54.032 14:12:35 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:54.032 [2024-04-26 14:12:35.589052] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:54.290 14:12:35 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:54.548 14:12:35 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:54.806 Malloc0 00:07:54.806 14:12:36 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:55.064 Delay0 00:07:55.064 14:12:36 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.322 14:12:36 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:55.580 NULL1 00:07:55.580 14:12:37 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:55.837 14:12:37 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=3096387 00:07:55.837 14:12:37 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:55.837 14:12:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:07:55.837 14:12:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.095 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.029 Read completed with error (sct=0, sc=11) 00:07:57.029 14:12:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.286 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.544 14:12:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:07:57.544 14:12:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:57.802 true 00:07:57.802 14:12:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:07:57.802 14:12:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.368 14:12:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.933 14:12:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:07:58.933 14:12:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:58.933 true 00:07:58.933 14:12:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:07:58.933 14:12:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.499 14:12:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.757 14:12:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:07:59.757 14:12:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:00.014 true 00:08:00.014 14:12:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:00.014 14:12:41 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.272 14:12:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.529 14:12:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:08:00.530 14:12:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:00.787 true 00:08:00.787 14:12:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:00.787 14:12:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.720 14:12:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.720 14:12:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:08:01.720 14:12:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:01.977 true 00:08:01.977 14:12:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:01.977 14:12:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.234 14:12:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.491 14:12:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:08:02.491 14:12:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:02.749 true 00:08:02.749 14:12:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:02.750 14:12:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.682 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.682 14:12:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.682 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.941 14:12:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:08:03.941 14:12:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:04.199 true 00:08:04.199 14:12:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:04.199 14:12:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.457 14:12:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.715 14:12:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:08:04.715 14:12:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:04.971 true 00:08:04.971 14:12:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:04.971 
14:12:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.904 14:12:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.904 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.162 14:12:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:08:06.162 14:12:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:06.418 true 00:08:06.418 14:12:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:06.418 14:12:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.674 14:12:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.931 14:12:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:08:06.931 14:12:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:07.189 true 00:08:07.189 14:12:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:07.189 14:12:48 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.446 14:12:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.704 14:12:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:08:07.704 14:12:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:07.704 true 00:08:07.704 14:12:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:07.704 14:12:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.088 14:12:50 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.088 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.088 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.088 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.088 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.088 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.088 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.088 14:12:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:08:09.088 14:12:50 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:09.346 true 00:08:09.346 14:12:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:09.346 14:12:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.279 14:12:51 -- 
target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.536 14:12:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:08:10.536 14:12:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:10.792 true 00:08:10.792 14:12:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:10.792 14:12:52 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.049 14:12:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.306 14:12:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:08:11.306 14:12:52 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:11.562 true 00:08:11.562 14:12:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:11.562 14:12:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.820 14:12:53 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.383 14:12:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:08:12.383 14:12:53 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:12.383 true 00:08:12.383 14:12:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:12.383 14:12:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.640 14:12:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.897 14:12:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:08:12.897 14:12:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:13.202 true 00:08:13.202 14:12:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:13.202 14:12:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.161 14:12:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.161 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:14.418 14:12:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:08:14.418 14:12:55 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:14.680 true 00:08:14.680 14:12:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:14.680 14:12:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.615 14:12:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.872 14:12:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:08:15.872 14:12:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:16.130 true 00:08:16.130 14:12:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:16.130 14:12:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.388 14:12:57 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.646 14:12:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:08:16.646 14:12:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:16.903 true 00:08:16.903 14:12:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:16.903 14:12:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.160 14:12:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.418 14:12:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:08:17.419 14:12:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:17.676 true 00:08:17.676 14:12:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:17.676 14:12:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.609 14:13:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.866 14:13:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:08:18.866 14:13:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:19.124 true 00:08:19.124 14:13:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:19.124 14:13:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.381 14:13:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.947 14:13:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:08:19.947 14:13:01 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:19.947 true 00:08:20.204 14:13:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:20.204 14:13:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.462 14:13:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.462 14:13:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:08:20.462 14:13:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:20.720 true 00:08:20.720 14:13:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:20.720 14:13:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:21.653 14:13:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.910 14:13:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:08:21.910 14:13:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:22.166 true 00:08:22.166 14:13:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:22.166 14:13:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.423 14:13:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.680 14:13:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:08:22.680 14:13:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:22.936 true 00:08:22.936 14:13:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:22.936 14:13:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.867 14:13:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.125 14:13:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:08:24.125 14:13:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:24.382 true 00:08:24.382 14:13:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:24.382 14:13:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.640 14:13:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.897 14:13:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:08:24.897 14:13:06 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:25.154 true 00:08:25.154 14:13:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:25.154 14:13:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.719 14:13:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.977 14:13:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:08:25.977 14:13:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:26.235 true 00:08:26.235 14:13:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387 00:08:26.235 14:13:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.801 Initializing NVMe Controllers 00:08:26.801 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:26.801 Controller IO queue size 128, less than required. 00:08:26.801 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:26.801 Controller IO queue size 128, less than required. 00:08:26.801 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:26.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:26.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:26.801 Initialization complete. Launching workers. 
00:08:26.801 ========================================================
00:08:26.801                                                                              Latency(us)
00:08:26.801 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:08:26.801 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     853.07       0.42   74078.49    4051.69 1017805.75
00:08:26.801 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    8380.23       4.09   15273.99    2148.31  530987.05
00:08:26.801 ========================================================
00:08:26.801 Total                                                                    :    9233.30       4.51   20706.95    2148.31 1017805.75
00:08:26.801
00:08:26.801 14:13:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:27.367 14:13:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029
00:08:27.367 14:13:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:08:27.367 true
00:08:27.367 14:13:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 3096387
00:08:27.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (3096387) - No such process
00:08:27.367 14:13:08 -- target/ns_hotplug_stress.sh@44 -- # wait 3096387
00:08:27.367 14:13:08 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:08:27.367 14:13:08 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini
00:08:27.367 14:13:08 -- nvmf/common.sh@477 -- # nvmfcleanup
00:08:27.367 14:13:08 -- nvmf/common.sh@117 -- # sync
00:08:27.367 14:13:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:08:27.626 14:13:08 -- nvmf/common.sh@120 -- # set +e
00:08:27.626 14:13:08 -- nvmf/common.sh@121 -- # for i in {1..20}
00:08:27.626 14:13:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:08:27.626 rmmod nvme_tcp
00:08:27.626 rmmod nvme_fabrics
00:08:27.626 rmmod nvme_keyring
00:08:27.626 14:13:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:08:27.626 14:13:09 -- nvmf/common.sh@124 -- # set -e
00:08:27.626 14:13:09 -- nvmf/common.sh@125 -- # return 0
00:08:27.626 14:13:09 -- nvmf/common.sh@478 -- # '[' -n 3096063 ']'
00:08:27.626 14:13:09 -- nvmf/common.sh@479 -- # killprocess 3096063
00:08:27.626 14:13:09 -- common/autotest_common.sh@936 -- # '[' -z 3096063 ']'
00:08:27.626 14:13:09 -- common/autotest_common.sh@940 -- # kill -0 3096063
00:08:27.626 14:13:09 -- common/autotest_common.sh@941 -- # uname
00:08:27.626 14:13:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:27.626 14:13:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3096063
00:08:27.626 14:13:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:08:27.626 14:13:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:08:27.626 14:13:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3096063'
00:08:27.626 killing process with pid 3096063
00:08:27.626 14:13:09 -- common/autotest_common.sh@955 -- # kill 3096063
00:08:27.626 14:13:09 -- common/autotest_common.sh@960 -- # wait 3096063
00:08:27.885 14:13:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:08:27.885 14:13:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:08:27.885 14:13:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:08:27.885 14:13:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:08:27.885 14:13:09 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:08:27.885 14:13:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:27.885 14:13:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.885 14:13:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.792 14:13:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:29.792 00:08:29.792 real 0m38.776s 00:08:29.792 user 2m32.049s 00:08:29.792 sys 0m10.228s 00:08:29.792 14:13:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:29.792 14:13:11 -- common/autotest_common.sh@10 -- # set +x 00:08:29.792 ************************************ 00:08:29.792 END TEST nvmf_ns_hotplug_stress 00:08:29.792 ************************************ 00:08:29.792 14:13:11 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:29.792 14:13:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:29.792 14:13:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:29.792 14:13:11 -- common/autotest_common.sh@10 -- # set +x 00:08:30.050 ************************************ 00:08:30.050 START TEST nvmf_connect_stress 00:08:30.050 ************************************ 00:08:30.050 14:13:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:30.050 * Looking for test storage... 00:08:30.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.050 14:13:11 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.050 14:13:11 -- nvmf/common.sh@7 -- # uname -s 00:08:30.050 14:13:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.050 14:13:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.050 14:13:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.050 14:13:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.050 14:13:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.050 14:13:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.050 14:13:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.050 14:13:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.050 14:13:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.050 14:13:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.050 14:13:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:30.050 14:13:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:30.050 14:13:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.051 14:13:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.051 14:13:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.051 14:13:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.051 14:13:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.051 14:13:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.051 14:13:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.051 14:13:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.051 14:13:11 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.051 14:13:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.051 14:13:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.051 14:13:11 -- paths/export.sh@5 -- # export PATH 00:08:30.051 14:13:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.051 14:13:11 -- nvmf/common.sh@47 -- # : 0 00:08:30.051 14:13:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:30.051 14:13:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:30.051 14:13:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.051 14:13:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.051 14:13:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.051 14:13:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:30.051 14:13:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:30.051 14:13:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:30.051 14:13:11 -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:30.051 14:13:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:30.051 14:13:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.051 14:13:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:30.051 14:13:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:30.051 14:13:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:30.051 14:13:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.051 14:13:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.051 14:13:11 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.051 14:13:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:30.051 14:13:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:30.051 14:13:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:30.051 14:13:11 -- common/autotest_common.sh@10 -- # set +x 00:08:31.954 14:13:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:31.954 14:13:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:31.954 14:13:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:31.954 14:13:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:31.954 14:13:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:31.954 14:13:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:31.954 14:13:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:31.954 14:13:13 -- nvmf/common.sh@295 -- # net_devs=() 00:08:31.954 14:13:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:31.954 14:13:13 -- nvmf/common.sh@296 -- # e810=() 00:08:31.954 14:13:13 -- nvmf/common.sh@296 -- # local -ga e810 00:08:31.954 14:13:13 -- nvmf/common.sh@297 -- # x722=() 00:08:31.954 14:13:13 -- nvmf/common.sh@297 -- # local -ga x722 00:08:31.954 14:13:13 -- nvmf/common.sh@298 -- # mlx=() 00:08:31.954 14:13:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:31.954 14:13:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.954 14:13:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.954 14:13:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.954 14:13:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.954 14:13:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.954 14:13:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.954 14:13:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.954 14:13:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.954 14:13:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.954 14:13:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.954 14:13:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.954 14:13:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:31.954 14:13:13 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:31.954 14:13:13 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:31.954 14:13:13 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:31.954 14:13:13 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:31.954 14:13:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:31.954 14:13:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.954 14:13:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:08:31.954 Found 0000:08:00.0 (0x8086 - 0x159b) 00:08:31.954 14:13:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.954 14:13:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.954 14:13:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.954 14:13:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.954 14:13:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.954 14:13:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.954 14:13:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:08:31.954 Found 0000:08:00.1 (0x8086 - 0x159b) 00:08:31.954 
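The Found lines above come from gather_supported_nvmf_pci_devs matching the E810 device IDs; the pci_net_devs glob evaluated in the trace that follows is what turns each matched PCI function into a usable net device name. A minimal sketch of that sysfs lookup, assuming the standard kernel layout (the two PCI addresses are the ones from this run):

  #!/usr/bin/env bash
  # Resolve each NVMf-capable PCI function to the net device the kernel
  # registered under its sysfs node (cvl_0_0 / cvl_0_1 in this log).
  pci_devs=("0000:08:00.0" "0000:08:00.1")
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # full sysfs paths
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done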
14:13:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:31.954 14:13:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:31.954 14:13:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.954 14:13:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.954 14:13:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:31.954 14:13:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:31.954 14:13:13 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:31.954 14:13:13 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:31.954 14:13:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.954 14:13:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.954 14:13:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:31.954 14:13:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.954 14:13:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:08:31.954 Found net devices under 0000:08:00.0: cvl_0_0 00:08:31.954 14:13:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.954 14:13:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.954 14:13:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.954 14:13:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:31.954 14:13:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.954 14:13:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:08:31.954 Found net devices under 0000:08:00.1: cvl_0_1 00:08:31.954 14:13:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.954 14:13:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:31.954 14:13:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:31.954 14:13:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:31.954 14:13:13 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:31.954 14:13:13 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:31.954 14:13:13 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.954 14:13:13 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.954 14:13:13 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.954 14:13:13 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:31.954 14:13:13 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.954 14:13:13 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.954 14:13:13 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:31.954 14:13:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.954 14:13:13 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.954 14:13:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:31.954 14:13:13 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:31.954 14:13:13 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.954 14:13:13 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.954 14:13:13 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.954 14:13:13 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.954 14:13:13 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:31.954 14:13:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.954 14:13:13 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.954 14:13:13 -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.954 14:13:13 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:31.954 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.954 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:08:31.954 00:08:31.954 --- 10.0.0.2 ping statistics --- 00:08:31.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.954 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:08:31.954 14:13:13 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:08:31.954 00:08:31.954 --- 10.0.0.1 ping statistics --- 00:08:31.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.954 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:08:31.954 14:13:13 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.954 14:13:13 -- nvmf/common.sh@411 -- # return 0 00:08:31.954 14:13:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:31.954 14:13:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.954 14:13:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:31.954 14:13:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:31.954 14:13:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.954 14:13:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:31.954 14:13:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:31.954 14:13:13 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:31.954 14:13:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:31.954 14:13:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:31.954 14:13:13 -- common/autotest_common.sh@10 -- # set +x 00:08:31.954 14:13:13 -- nvmf/common.sh@470 -- # nvmfpid=3100867 00:08:31.954 14:13:13 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:31.954 14:13:13 -- nvmf/common.sh@471 -- # waitforlisten 3100867 00:08:31.954 14:13:13 -- common/autotest_common.sh@817 -- # '[' -z 3100867 ']' 00:08:31.954 14:13:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.954 14:13:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:31.954 14:13:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.954 14:13:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:31.954 14:13:13 -- common/autotest_common.sh@10 -- # set +x 00:08:31.954 [2024-04-26 14:13:13.264012] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:08:31.954 [2024-04-26 14:13:13.264098] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.954 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.954 [2024-04-26 14:13:13.328694] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:31.954 [2024-04-26 14:13:13.443107] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
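Before the target application's startup notices continue below, it is worth unpacking the namespace plumbing traced earlier in this block. nvmf_tcp_init turns the two E810 ports into a real initiator/target pair on one host: the target port is moved into its own network namespace, each side gets a 10.0.0.x address, port 4420 is opened, and the two pings prove the path in both directions. A condensed sketch of that setup, using the interface and namespace names from this run:

  # Target side lives in cvl_0_0_ns_spdk; initiator stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
  ping -c 1 10.0.0.2                                 # root ns reaches the target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns reaches the initiator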
00:08:31.954 [2024-04-26 14:13:13.443168] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.954 [2024-04-26 14:13:13.443185] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.954 [2024-04-26 14:13:13.443199] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.954 [2024-04-26 14:13:13.443212] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.955 [2024-04-26 14:13:13.443297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.955 [2024-04-26 14:13:13.443356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.955 [2024-04-26 14:13:13.443352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.212 14:13:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:32.212 14:13:13 -- common/autotest_common.sh@850 -- # return 0 00:08:32.212 14:13:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:32.212 14:13:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:32.212 14:13:13 -- common/autotest_common.sh@10 -- # set +x 00:08:32.212 14:13:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.212 14:13:13 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:32.212 14:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.212 14:13:13 -- common/autotest_common.sh@10 -- # set +x 00:08:32.212 [2024-04-26 14:13:13.574728] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.212 14:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.212 14:13:13 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:32.212 14:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.212 14:13:13 -- common/autotest_common.sh@10 -- # set +x 00:08:32.212 14:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.212 14:13:13 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:32.212 14:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.212 14:13:13 -- common/autotest_common.sh@10 -- # set +x 00:08:32.212 [2024-04-26 14:13:13.611796] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.212 14:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.212 14:13:13 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:32.212 14:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.212 14:13:13 -- common/autotest_common.sh@10 -- # set +x 00:08:32.212 NULL1 00:08:32.212 14:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.212 14:13:13 -- target/connect_stress.sh@21 -- # PERF_PID=3100899 00:08:32.212 14:13:13 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:32.212 14:13:13 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:32.212 14:13:13 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:32.212 14:13:13 -- target/connect_stress.sh@27 -- # seq 1 20 00:08:32.212 14:13:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.212 14:13:13 -- target/connect_stress.sh@28 -- # cat 00:08:32.212 14:13:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.212 14:13:13 -- target/connect_stress.sh@28 -- # cat 00:08:32.212 14:13:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.212 14:13:13 -- target/connect_stress.sh@28 -- # cat 00:08:32.212 14:13:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.212 14:13:13 -- target/connect_stress.sh@28 -- # cat 00:08:32.212 14:13:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.212 14:13:13 -- target/connect_stress.sh@28 -- # cat 00:08:32.212 14:13:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.212 14:13:13 -- target/connect_stress.sh@28 -- # cat 00:08:32.212 14:13:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.212 14:13:13 -- target/connect_stress.sh@28 -- # cat 00:08:32.212 14:13:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.212 14:13:13 -- target/connect_stress.sh@28 -- # cat 00:08:32.212 14:13:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.212 14:13:13 -- target/connect_stress.sh@28 -- # cat 00:08:32.212 14:13:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.212 14:13:13 -- target/connect_stress.sh@28 -- # cat 00:08:32.212 14:13:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.212 14:13:13 -- target/connect_stress.sh@28 -- # cat 00:08:32.212 14:13:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.212 14:13:13 -- target/connect_stress.sh@28 -- # cat 00:08:32.212 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.212 14:13:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.212 14:13:13 -- target/connect_stress.sh@28 -- # cat 00:08:32.212 14:13:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.212 14:13:13 -- target/connect_stress.sh@28 -- # cat 00:08:32.212 14:13:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.212 14:13:13 -- target/connect_stress.sh@28 -- # cat 00:08:32.212 14:13:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.212 14:13:13 -- target/connect_stress.sh@28 -- # cat 00:08:32.212 14:13:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.212 14:13:13 -- target/connect_stress.sh@28 -- # cat 00:08:32.212 14:13:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.212 14:13:13 -- target/connect_stress.sh@28 -- # cat 00:08:32.212 14:13:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.212 14:13:13 -- target/connect_stress.sh@28 -- # cat 00:08:32.212 14:13:13 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:32.212 14:13:13 -- target/connect_stress.sh@28 -- # cat 00:08:32.212 14:13:13 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:32.212 14:13:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:32.212 14:13:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:32.212 14:13:13 -- common/autotest_common.sh@10 -- # set +x 00:08:32.470 14:13:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:32.470 14:13:13 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:32.470 14:13:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:32.470 14:13:13 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:08:32.470 14:13:13 -- common/autotest_common.sh@10 -- # set +x 00:08:33.036 14:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.036 14:13:14 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:33.036 14:13:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:33.036 14:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.036 14:13:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.294 14:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.294 14:13:14 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:33.294 14:13:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:33.294 14:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.294 14:13:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.552 14:13:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.552 14:13:14 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:33.552 14:13:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:33.552 14:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.552 14:13:14 -- common/autotest_common.sh@10 -- # set +x 00:08:33.810 14:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:33.810 14:13:15 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:33.810 14:13:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:33.810 14:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:33.810 14:13:15 -- common/autotest_common.sh@10 -- # set +x 00:08:34.067 14:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.067 14:13:15 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:34.067 14:13:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:34.067 14:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.067 14:13:15 -- common/autotest_common.sh@10 -- # set +x 00:08:34.633 14:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.633 14:13:15 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:34.633 14:13:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:34.633 14:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.633 14:13:15 -- common/autotest_common.sh@10 -- # set +x 00:08:34.890 14:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.890 14:13:16 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:34.890 14:13:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:34.890 14:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.890 14:13:16 -- common/autotest_common.sh@10 -- # set +x 00:08:35.148 14:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.148 14:13:16 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:35.148 14:13:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:35.148 14:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.148 14:13:16 -- common/autotest_common.sh@10 -- # set +x 00:08:35.405 14:13:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.405 14:13:16 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:35.405 14:13:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:35.405 14:13:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.405 14:13:16 -- common/autotest_common.sh@10 -- # set +x 00:08:35.663 14:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.663 14:13:17 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:35.663 14:13:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:35.663 14:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:08:35.663 14:13:17 -- common/autotest_common.sh@10 -- # set +x 00:08:36.228 14:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:36.228 14:13:17 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:36.228 14:13:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:36.228 14:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:36.228 14:13:17 -- common/autotest_common.sh@10 -- # set +x 00:08:36.485 14:13:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:36.485 14:13:17 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:36.485 14:13:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:36.485 14:13:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:36.485 14:13:17 -- common/autotest_common.sh@10 -- # set +x 00:08:36.742 14:13:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:36.742 14:13:18 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:36.742 14:13:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:36.742 14:13:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:36.742 14:13:18 -- common/autotest_common.sh@10 -- # set +x 00:08:36.999 14:13:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:36.999 14:13:18 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:36.999 14:13:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:36.999 14:13:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:36.999 14:13:18 -- common/autotest_common.sh@10 -- # set +x 00:08:37.257 14:13:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.257 14:13:18 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:37.257 14:13:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:37.257 14:13:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.257 14:13:18 -- common/autotest_common.sh@10 -- # set +x 00:08:37.846 14:13:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:37.846 14:13:19 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:37.846 14:13:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:37.846 14:13:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:37.846 14:13:19 -- common/autotest_common.sh@10 -- # set +x 00:08:38.103 14:13:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:38.103 14:13:19 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:38.103 14:13:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:38.103 14:13:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:38.103 14:13:19 -- common/autotest_common.sh@10 -- # set +x 00:08:38.360 14:13:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:38.360 14:13:19 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:38.360 14:13:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:38.360 14:13:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:38.360 14:13:19 -- common/autotest_common.sh@10 -- # set +x 00:08:38.616 14:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:38.616 14:13:20 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:38.616 14:13:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:38.616 14:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:38.616 14:13:20 -- common/autotest_common.sh@10 -- # set +x 00:08:38.873 14:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:38.873 14:13:20 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:38.873 14:13:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:38.873 14:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:38.873 
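The alternating kill -0 3100899 and rpc_cmd entries that fill this stretch of the log, above and below, are the monitor half of connect_stress: while the stress binary (PERF_PID) keeps connecting and disconnecting, the harness hammers the target's RPC server and stops the moment either side exits. Roughly, and only as a sketch inferred from these traces, the loop looks like:

  # Watchdog from connect_stress.sh (sketch): keep the RPC server busy for as
  # long as the stress process is alive; kill -0 only probes, it sends no signal.
  while kill -0 "$PERF_PID" 2>/dev/null; do
      rpc_cmd < "$rpcs"   # replay the RPC batch queued into rpc.txt earlier
  done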
14:13:20 -- common/autotest_common.sh@10 -- # set +x 00:08:39.437 14:13:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:39.437 14:13:20 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:39.437 14:13:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:39.437 14:13:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:39.437 14:13:20 -- common/autotest_common.sh@10 -- # set +x 00:08:39.695 14:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:39.695 14:13:21 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:39.695 14:13:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:39.695 14:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:39.695 14:13:21 -- common/autotest_common.sh@10 -- # set +x 00:08:39.952 14:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:39.952 14:13:21 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:39.952 14:13:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:39.952 14:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:39.952 14:13:21 -- common/autotest_common.sh@10 -- # set +x 00:08:40.210 14:13:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:40.210 14:13:21 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:40.210 14:13:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:40.210 14:13:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:40.210 14:13:21 -- common/autotest_common.sh@10 -- # set +x 00:08:40.468 14:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:40.468 14:13:22 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:40.468 14:13:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:40.468 14:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:40.468 14:13:22 -- common/autotest_common.sh@10 -- # set +x 00:08:41.034 14:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:41.034 14:13:22 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:41.034 14:13:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:41.034 14:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:41.034 14:13:22 -- common/autotest_common.sh@10 -- # set +x 00:08:41.291 14:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:41.291 14:13:22 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:41.291 14:13:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:41.291 14:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:41.291 14:13:22 -- common/autotest_common.sh@10 -- # set +x 00:08:41.549 14:13:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:41.549 14:13:22 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:41.549 14:13:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:41.549 14:13:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:41.549 14:13:22 -- common/autotest_common.sh@10 -- # set +x 00:08:41.806 14:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:41.806 14:13:23 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:41.806 14:13:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:41.806 14:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:41.806 14:13:23 -- common/autotest_common.sh@10 -- # set +x 00:08:42.064 14:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:42.064 14:13:23 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:42.064 14:13:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:42.064 14:13:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:42.064 14:13:23 -- 
common/autotest_common.sh@10 -- # set +x 00:08:42.320 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:42.577 14:13:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:42.577 14:13:23 -- target/connect_stress.sh@34 -- # kill -0 3100899 00:08:42.577 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3100899) - No such process 00:08:42.577 14:13:23 -- target/connect_stress.sh@38 -- # wait 3100899 00:08:42.577 14:13:23 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:42.577 14:13:23 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:42.577 14:13:23 -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:42.577 14:13:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:42.577 14:13:23 -- nvmf/common.sh@117 -- # sync 00:08:42.577 14:13:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:42.577 14:13:23 -- nvmf/common.sh@120 -- # set +e 00:08:42.577 14:13:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:42.577 14:13:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:42.577 rmmod nvme_tcp 00:08:42.577 rmmod nvme_fabrics 00:08:42.577 rmmod nvme_keyring 00:08:42.577 14:13:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:42.577 14:13:24 -- nvmf/common.sh@124 -- # set -e 00:08:42.577 14:13:24 -- nvmf/common.sh@125 -- # return 0 00:08:42.577 14:13:24 -- nvmf/common.sh@478 -- # '[' -n 3100867 ']' 00:08:42.577 14:13:24 -- nvmf/common.sh@479 -- # killprocess 3100867 00:08:42.578 14:13:24 -- common/autotest_common.sh@936 -- # '[' -z 3100867 ']' 00:08:42.578 14:13:24 -- common/autotest_common.sh@940 -- # kill -0 3100867 00:08:42.578 14:13:24 -- common/autotest_common.sh@941 -- # uname 00:08:42.578 14:13:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:42.578 14:13:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3100867 00:08:42.578 14:13:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:42.578 14:13:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:42.578 14:13:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3100867' 00:08:42.578 killing process with pid 3100867 00:08:42.578 14:13:24 -- common/autotest_common.sh@955 -- # kill 3100867 00:08:42.578 14:13:24 -- common/autotest_common.sh@960 -- # wait 3100867 00:08:42.837 14:13:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:42.837 14:13:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:42.837 14:13:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:42.837 14:13:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:42.837 14:13:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:42.837 14:13:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.837 14:13:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.837 14:13:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.746 14:13:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:44.746 00:08:44.746 real 0m14.878s 00:08:44.746 user 0m38.457s 00:08:44.746 sys 0m5.162s 00:08:44.746 14:13:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:44.746 14:13:26 -- common/autotest_common.sh@10 -- # set +x 00:08:44.746 ************************************ 00:08:44.746 END TEST nvmf_connect_stress 00:08:44.746 ************************************ 00:08:45.005 14:13:26 -- 
nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:45.005 14:13:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:45.005 14:13:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:45.005 14:13:26 -- common/autotest_common.sh@10 -- # set +x 00:08:45.005 ************************************ 00:08:45.005 START TEST nvmf_fused_ordering 00:08:45.005 ************************************ 00:08:45.005 14:13:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:45.005 * Looking for test storage... 00:08:45.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:45.005 14:13:26 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.005 14:13:26 -- nvmf/common.sh@7 -- # uname -s 00:08:45.005 14:13:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.005 14:13:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.005 14:13:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.005 14:13:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.005 14:13:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.005 14:13:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.005 14:13:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.005 14:13:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.005 14:13:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.005 14:13:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.005 14:13:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:45.005 14:13:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:45.005 14:13:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.005 14:13:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.005 14:13:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.005 14:13:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.005 14:13:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:45.005 14:13:26 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.005 14:13:26 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.005 14:13:26 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.005 14:13:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.005 14:13:26 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.005 14:13:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.005 14:13:26 -- paths/export.sh@5 -- # export PATH 00:08:45.005 14:13:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.005 14:13:26 -- nvmf/common.sh@47 -- # : 0 00:08:45.005 14:13:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:45.005 14:13:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:45.005 14:13:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.005 14:13:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.005 14:13:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.005 14:13:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:45.005 14:13:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:45.005 14:13:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:45.005 14:13:26 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:45.005 14:13:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:45.005 14:13:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.005 14:13:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:45.005 14:13:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:45.005 14:13:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:45.005 14:13:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.006 14:13:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.006 14:13:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.006 14:13:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:45.006 14:13:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:45.006 14:13:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:45.006 14:13:26 -- common/autotest_common.sh@10 -- # set +x 00:08:46.982 14:13:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:46.982 14:13:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:46.982 14:13:28 -- nvmf/common.sh@291 -- # local -a pci_devs 
00:08:46.982 14:13:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:46.982 14:13:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:46.982 14:13:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:46.982 14:13:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:46.982 14:13:28 -- nvmf/common.sh@295 -- # net_devs=() 00:08:46.982 14:13:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:46.982 14:13:28 -- nvmf/common.sh@296 -- # e810=() 00:08:46.982 14:13:28 -- nvmf/common.sh@296 -- # local -ga e810 00:08:46.982 14:13:28 -- nvmf/common.sh@297 -- # x722=() 00:08:46.982 14:13:28 -- nvmf/common.sh@297 -- # local -ga x722 00:08:46.982 14:13:28 -- nvmf/common.sh@298 -- # mlx=() 00:08:46.982 14:13:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:46.982 14:13:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:46.982 14:13:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:46.982 14:13:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:46.982 14:13:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:46.982 14:13:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:46.982 14:13:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:46.982 14:13:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:46.982 14:13:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:46.982 14:13:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:46.982 14:13:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:46.982 14:13:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:46.982 14:13:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:46.982 14:13:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:46.982 14:13:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:46.982 14:13:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:46.982 14:13:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:46.982 14:13:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:46.982 14:13:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:46.982 14:13:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:08:46.982 Found 0000:08:00.0 (0x8086 - 0x159b) 00:08:46.982 14:13:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:46.982 14:13:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:46.982 14:13:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.982 14:13:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.982 14:13:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:46.982 14:13:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:46.982 14:13:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:08:46.982 Found 0000:08:00.1 (0x8086 - 0x159b) 00:08:46.982 14:13:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:46.982 14:13:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:46.982 14:13:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.982 14:13:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.982 14:13:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:46.982 14:13:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:46.982 14:13:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:46.982 14:13:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:08:46.982 14:13:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:46.982 14:13:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.982 14:13:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:46.982 14:13:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.982 14:13:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:08:46.982 Found net devices under 0000:08:00.0: cvl_0_0 00:08:46.982 14:13:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.982 14:13:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:46.982 14:13:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.982 14:13:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:46.982 14:13:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.982 14:13:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:08:46.982 Found net devices under 0000:08:00.1: cvl_0_1 00:08:46.982 14:13:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.982 14:13:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:46.982 14:13:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:46.982 14:13:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:46.982 14:13:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:46.982 14:13:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:46.982 14:13:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.982 14:13:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:46.982 14:13:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:46.982 14:13:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:46.982 14:13:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:46.982 14:13:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:46.982 14:13:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:46.982 14:13:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:46.982 14:13:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.982 14:13:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:46.982 14:13:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:46.982 14:13:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:46.982 14:13:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:46.982 14:13:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:46.982 14:13:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:46.982 14:13:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:46.982 14:13:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:46.982 14:13:28 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:46.982 14:13:28 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:46.982 14:13:28 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:46.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:46.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:08:46.982 00:08:46.982 --- 10.0.0.2 ping statistics --- 00:08:46.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.982 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:08:46.982 14:13:28 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:46.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:46.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:08:46.982 00:08:46.982 --- 10.0.0.1 ping statistics --- 00:08:46.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.982 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:08:46.982 14:13:28 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.982 14:13:28 -- nvmf/common.sh@411 -- # return 0 00:08:46.982 14:13:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:46.982 14:13:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.982 14:13:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:46.982 14:13:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:46.982 14:13:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.982 14:13:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:46.982 14:13:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:46.982 14:13:28 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:08:46.982 14:13:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:46.982 14:13:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:46.982 14:13:28 -- common/autotest_common.sh@10 -- # set +x 00:08:46.982 14:13:28 -- nvmf/common.sh@470 -- # nvmfpid=3103417 00:08:46.982 14:13:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:46.982 14:13:28 -- nvmf/common.sh@471 -- # waitforlisten 3103417 00:08:46.982 14:13:28 -- common/autotest_common.sh@817 -- # '[' -z 3103417 ']' 00:08:46.982 14:13:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.982 14:13:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:46.982 14:13:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.982 14:13:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:46.982 14:13:28 -- common/autotest_common.sh@10 -- # set +x 00:08:46.983 [2024-04-26 14:13:28.259082] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:08:46.983 [2024-04-26 14:13:28.259182] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.983 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.983 [2024-04-26 14:13:28.327190] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.983 [2024-04-26 14:13:28.445147] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.983 [2024-04-26 14:13:28.445211] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
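nvmfappstart, traced just above, launches nvmf_tgt inside the target namespace and then sits in waitforlisten until the app's RPC socket at /var/tmp/spdk.sock answers (max_retries=100, per the trace); the DPDK and reactor startup notices follow below. A minimal sketch of that startup handshake; the rpc_get_methods probe and the 0.5 s poll interval are illustrative choices, not lifted from the script:

  # Launch the target in the namespace, then poll its RPC socket until it is up.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  rpc_addr=/var/tmp/spdk.sock
  max_retries=100
  for ((i = 0; i < max_retries; i++)); do
      ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break
      kill -0 "$nvmfpid" || exit 1   # fail fast if nvmf_tgt already exited
      sleep 0.5
  done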
00:08:46.983 [2024-04-26 14:13:28.445227] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.983 [2024-04-26 14:13:28.445241] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:46.983 [2024-04-26 14:13:28.445252] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:46.983 [2024-04-26 14:13:28.445287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.240 14:13:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:47.240 14:13:28 -- common/autotest_common.sh@850 -- # return 0 00:08:47.240 14:13:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:47.240 14:13:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:47.241 14:13:28 -- common/autotest_common.sh@10 -- # set +x 00:08:47.241 14:13:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.241 14:13:28 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:47.241 14:13:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:47.241 14:13:28 -- common/autotest_common.sh@10 -- # set +x 00:08:47.241 [2024-04-26 14:13:28.582029] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.241 14:13:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:47.241 14:13:28 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:47.241 14:13:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:47.241 14:13:28 -- common/autotest_common.sh@10 -- # set +x 00:08:47.241 14:13:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:47.241 14:13:28 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.241 14:13:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:47.241 14:13:28 -- common/autotest_common.sh@10 -- # set +x 00:08:47.241 [2024-04-26 14:13:28.598180] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.241 14:13:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:47.241 14:13:28 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:47.241 14:13:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:47.241 14:13:28 -- common/autotest_common.sh@10 -- # set +x 00:08:47.241 NULL1 00:08:47.241 14:13:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:47.241 14:13:28 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:08:47.241 14:13:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:47.241 14:13:28 -- common/autotest_common.sh@10 -- # set +x 00:08:47.241 14:13:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:47.241 14:13:28 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:47.241 14:13:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:47.241 14:13:28 -- common/autotest_common.sh@10 -- # set +x 00:08:47.241 14:13:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:47.241 14:13:28 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:47.241 [2024-04-26 14:13:28.644629] Starting SPDK v24.05-pre 
git sha1 7f48663af / DPDK 23.11.0 initialization... 00:08:47.241 [2024-04-26 14:13:28.644704] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3103446 ] 00:08:47.241 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.807 Attached to nqn.2016-06.io.spdk:cnode1 00:08:47.807 Namespace ID: 1 size: 1GB 00:08:47.807 fused_ordering(0) 00:08:47.807 fused_ordering(1) 00:08:47.807 fused_ordering(2) 00:08:47.807 fused_ordering(3) 00:08:47.807 fused_ordering(4) 00:08:47.807 fused_ordering(5) 00:08:47.807 fused_ordering(6) 00:08:47.807 fused_ordering(7) 00:08:47.807 fused_ordering(8) 00:08:47.807 fused_ordering(9) 00:08:47.807 fused_ordering(10) 00:08:47.807 fused_ordering(11) 00:08:47.807 fused_ordering(12) 00:08:47.807 fused_ordering(13) 00:08:47.807 fused_ordering(14) 00:08:47.807 fused_ordering(15) 00:08:47.807 fused_ordering(16) 00:08:47.807 fused_ordering(17) 00:08:47.807 fused_ordering(18) 00:08:47.807 fused_ordering(19) 00:08:47.807 fused_ordering(20) 00:08:47.807 fused_ordering(21) 00:08:47.807 fused_ordering(22) 00:08:47.807 fused_ordering(23) 00:08:47.807 fused_ordering(24) 00:08:47.807 fused_ordering(25) 00:08:47.807 fused_ordering(26) 00:08:47.807 fused_ordering(27) 00:08:47.807 fused_ordering(28) 00:08:47.807 fused_ordering(29) 00:08:47.807 fused_ordering(30) 00:08:47.807 fused_ordering(31) 00:08:47.807 fused_ordering(32) 00:08:47.807 fused_ordering(33) 00:08:47.807 fused_ordering(34) 00:08:47.807 fused_ordering(35) 00:08:47.807 fused_ordering(36) 00:08:47.807 fused_ordering(37) 00:08:47.807 fused_ordering(38) 00:08:47.807 fused_ordering(39) 00:08:47.807 fused_ordering(40) 00:08:47.807 fused_ordering(41) 00:08:47.807 fused_ordering(42) 00:08:47.807 fused_ordering(43) 00:08:47.807 fused_ordering(44) 00:08:47.807 fused_ordering(45) 00:08:47.807 fused_ordering(46) 00:08:47.807 fused_ordering(47) 00:08:47.807 fused_ordering(48) 00:08:47.807 fused_ordering(49) 00:08:47.807 fused_ordering(50) 00:08:47.807 fused_ordering(51) 00:08:47.807 fused_ordering(52) 00:08:47.807 fused_ordering(53) 00:08:47.807 fused_ordering(54) 00:08:47.807 fused_ordering(55) 00:08:47.807 fused_ordering(56) 00:08:47.807 fused_ordering(57) 00:08:47.807 fused_ordering(58) 00:08:47.807 fused_ordering(59) 00:08:47.807 fused_ordering(60) 00:08:47.807 fused_ordering(61) 00:08:47.807 fused_ordering(62) 00:08:47.807 fused_ordering(63) 00:08:47.807 fused_ordering(64) 00:08:47.807 fused_ordering(65) 00:08:47.807 fused_ordering(66) 00:08:47.807 fused_ordering(67) 00:08:47.807 fused_ordering(68) 00:08:47.807 fused_ordering(69) 00:08:47.807 fused_ordering(70) 00:08:47.807 fused_ordering(71) 00:08:47.807 fused_ordering(72) 00:08:47.807 fused_ordering(73) 00:08:47.807 fused_ordering(74) 00:08:47.807 fused_ordering(75) 00:08:47.807 fused_ordering(76) 00:08:47.807 fused_ordering(77) 00:08:47.807 fused_ordering(78) 00:08:47.807 fused_ordering(79) 00:08:47.807 fused_ordering(80) 00:08:47.807 fused_ordering(81) 00:08:47.807 fused_ordering(82) 00:08:47.807 fused_ordering(83) 00:08:47.807 fused_ordering(84) 00:08:47.807 fused_ordering(85) 00:08:47.807 fused_ordering(86) 00:08:47.807 fused_ordering(87) 00:08:47.807 fused_ordering(88) 00:08:47.807 fused_ordering(89) 00:08:47.807 fused_ordering(90) 00:08:47.807 fused_ordering(91) 00:08:47.807 fused_ordering(92) 00:08:47.807 fused_ordering(93) 00:08:47.807 fused_ordering(94) 00:08:47.807 fused_ordering(95) 
00:08:47.807 fused_ordering(96) 00:08:47.807 fused_ordering(97) 00:08:47.807 fused_ordering(98) 00:08:47.807 fused_ordering(99) 00:08:47.807 fused_ordering(100) 00:08:47.807 fused_ordering(101) 00:08:47.807 fused_ordering(102) 00:08:47.807 fused_ordering(103) 00:08:47.807 fused_ordering(104) 00:08:47.807 fused_ordering(105) 00:08:47.807 fused_ordering(106) 00:08:47.807 fused_ordering(107) 00:08:47.807 fused_ordering(108) 00:08:47.807 fused_ordering(109) 00:08:47.807 fused_ordering(110) 00:08:47.807 fused_ordering(111) 00:08:47.807 fused_ordering(112) 00:08:47.807 fused_ordering(113) 00:08:47.807 fused_ordering(114) 00:08:47.807 fused_ordering(115) 00:08:47.807 fused_ordering(116) 00:08:47.807 fused_ordering(117) 00:08:47.807 fused_ordering(118) 00:08:47.807 fused_ordering(119) 00:08:47.807 fused_ordering(120) 00:08:47.807 fused_ordering(121) 00:08:47.807 fused_ordering(122) 00:08:47.807 fused_ordering(123) 00:08:47.807 fused_ordering(124) 00:08:47.807 fused_ordering(125) 00:08:47.807 fused_ordering(126) 00:08:47.807 fused_ordering(127) 00:08:47.807 fused_ordering(128) 00:08:47.807 fused_ordering(129) 00:08:47.807 fused_ordering(130) 00:08:47.807 fused_ordering(131) 00:08:47.807 fused_ordering(132) 00:08:47.807 fused_ordering(133) 00:08:47.807 fused_ordering(134) 00:08:47.807 fused_ordering(135) 00:08:47.807 fused_ordering(136) 00:08:47.807 fused_ordering(137) 00:08:47.807 fused_ordering(138) 00:08:47.807 fused_ordering(139) 00:08:47.807 fused_ordering(140) 00:08:47.807 fused_ordering(141) 00:08:47.807 fused_ordering(142) 00:08:47.807 fused_ordering(143) 00:08:47.807 fused_ordering(144) 00:08:47.807 fused_ordering(145) 00:08:47.807 fused_ordering(146) 00:08:47.807 fused_ordering(147) 00:08:47.807 fused_ordering(148) 00:08:47.807 fused_ordering(149) 00:08:47.807 fused_ordering(150) 00:08:47.807 fused_ordering(151) 00:08:47.807 fused_ordering(152) 00:08:47.807 fused_ordering(153) 00:08:47.807 fused_ordering(154) 00:08:47.807 fused_ordering(155) 00:08:47.807 fused_ordering(156) 00:08:47.807 fused_ordering(157) 00:08:47.807 fused_ordering(158) 00:08:47.807 fused_ordering(159) 00:08:47.807 fused_ordering(160) 00:08:47.807 fused_ordering(161) 00:08:47.807 fused_ordering(162) 00:08:47.807 fused_ordering(163) 00:08:47.808 fused_ordering(164) 00:08:47.808 fused_ordering(165) 00:08:47.808 fused_ordering(166) 00:08:47.808 fused_ordering(167) 00:08:47.808 fused_ordering(168) 00:08:47.808 fused_ordering(169) 00:08:47.808 fused_ordering(170) 00:08:47.808 fused_ordering(171) 00:08:47.808 fused_ordering(172) 00:08:47.808 fused_ordering(173) 00:08:47.808 fused_ordering(174) 00:08:47.808 fused_ordering(175) 00:08:47.808 fused_ordering(176) 00:08:47.808 fused_ordering(177) 00:08:47.808 fused_ordering(178) 00:08:47.808 fused_ordering(179) 00:08:47.808 fused_ordering(180) 00:08:47.808 fused_ordering(181) 00:08:47.808 fused_ordering(182) 00:08:47.808 fused_ordering(183) 00:08:47.808 fused_ordering(184) 00:08:47.808 fused_ordering(185) 00:08:47.808 fused_ordering(186) 00:08:47.808 fused_ordering(187) 00:08:47.808 fused_ordering(188) 00:08:47.808 fused_ordering(189) 00:08:47.808 fused_ordering(190) 00:08:47.808 fused_ordering(191) 00:08:47.808 fused_ordering(192) 00:08:47.808 fused_ordering(193) 00:08:47.808 fused_ordering(194) 00:08:47.808 fused_ordering(195) 00:08:47.808 fused_ordering(196) 00:08:47.808 fused_ordering(197) 00:08:47.808 fused_ordering(198) 00:08:47.808 fused_ordering(199) 00:08:47.808 fused_ordering(200) 00:08:47.808 fused_ordering(201) 00:08:47.808 fused_ordering(202) 00:08:47.808 
fused_ordering(203) 00:08:47.808 fused_ordering(204) 00:08:47.808 fused_ordering(205) 00:08:48.066 fused_ordering(206) 00:08:48.066 fused_ordering(207) 00:08:48.066 fused_ordering(208) 00:08:48.066 fused_ordering(209) 00:08:48.066 fused_ordering(210) 00:08:48.066 fused_ordering(211) 00:08:48.066 fused_ordering(212) 00:08:48.066 fused_ordering(213) 00:08:48.066 fused_ordering(214) 00:08:48.066 fused_ordering(215) 00:08:48.066 fused_ordering(216) 00:08:48.066 fused_ordering(217) 00:08:48.066 fused_ordering(218) 00:08:48.066 fused_ordering(219) 00:08:48.066 fused_ordering(220) 00:08:48.066 fused_ordering(221) 00:08:48.066 fused_ordering(222) 00:08:48.066 fused_ordering(223) 00:08:48.066 fused_ordering(224) 00:08:48.066 fused_ordering(225) 00:08:48.066 fused_ordering(226) 00:08:48.066 fused_ordering(227) 00:08:48.066 fused_ordering(228) 00:08:48.066 fused_ordering(229) 00:08:48.066 fused_ordering(230) 00:08:48.066 fused_ordering(231) 00:08:48.066 fused_ordering(232) 00:08:48.066 fused_ordering(233) 00:08:48.066 fused_ordering(234) 00:08:48.066 fused_ordering(235) 00:08:48.066 fused_ordering(236) 00:08:48.066 fused_ordering(237) 00:08:48.066 fused_ordering(238) 00:08:48.066 fused_ordering(239) 00:08:48.066 fused_ordering(240) 00:08:48.066 fused_ordering(241) 00:08:48.066 fused_ordering(242) 00:08:48.066 fused_ordering(243) 00:08:48.066 fused_ordering(244) 00:08:48.066 fused_ordering(245) 00:08:48.066 fused_ordering(246) 00:08:48.066 fused_ordering(247) 00:08:48.066 fused_ordering(248) 00:08:48.066 fused_ordering(249) 00:08:48.066 fused_ordering(250) 00:08:48.066 fused_ordering(251) 00:08:48.066 fused_ordering(252) 00:08:48.066 fused_ordering(253) 00:08:48.066 fused_ordering(254) 00:08:48.066 fused_ordering(255) 00:08:48.066 fused_ordering(256) 00:08:48.066 fused_ordering(257) 00:08:48.066 fused_ordering(258) 00:08:48.066 fused_ordering(259) 00:08:48.066 fused_ordering(260) 00:08:48.066 fused_ordering(261) 00:08:48.066 fused_ordering(262) 00:08:48.066 fused_ordering(263) 00:08:48.066 fused_ordering(264) 00:08:48.066 fused_ordering(265) 00:08:48.066 fused_ordering(266) 00:08:48.066 fused_ordering(267) 00:08:48.066 fused_ordering(268) 00:08:48.066 fused_ordering(269) 00:08:48.066 fused_ordering(270) 00:08:48.066 fused_ordering(271) 00:08:48.066 fused_ordering(272) 00:08:48.066 fused_ordering(273) 00:08:48.066 fused_ordering(274) 00:08:48.066 fused_ordering(275) 00:08:48.066 fused_ordering(276) 00:08:48.066 fused_ordering(277) 00:08:48.066 fused_ordering(278) 00:08:48.066 fused_ordering(279) 00:08:48.066 fused_ordering(280) 00:08:48.066 fused_ordering(281) 00:08:48.066 fused_ordering(282) 00:08:48.066 fused_ordering(283) 00:08:48.066 fused_ordering(284) 00:08:48.066 fused_ordering(285) 00:08:48.066 fused_ordering(286) 00:08:48.066 fused_ordering(287) 00:08:48.066 fused_ordering(288) 00:08:48.066 fused_ordering(289) 00:08:48.066 fused_ordering(290) 00:08:48.066 fused_ordering(291) 00:08:48.066 fused_ordering(292) 00:08:48.066 fused_ordering(293) 00:08:48.066 fused_ordering(294) 00:08:48.066 fused_ordering(295) 00:08:48.066 fused_ordering(296) 00:08:48.066 fused_ordering(297) 00:08:48.066 fused_ordering(298) 00:08:48.066 fused_ordering(299) 00:08:48.066 fused_ordering(300) 00:08:48.066 fused_ordering(301) 00:08:48.066 fused_ordering(302) 00:08:48.066 fused_ordering(303) 00:08:48.066 fused_ordering(304) 00:08:48.066 fused_ordering(305) 00:08:48.066 fused_ordering(306) 00:08:48.066 fused_ordering(307) 00:08:48.066 fused_ordering(308) 00:08:48.067 fused_ordering(309) 00:08:48.067 fused_ordering(310) 
00:08:48.067 fused_ordering(311) 00:08:48.067 fused_ordering(312) 00:08:48.067 fused_ordering(313) 00:08:48.067 fused_ordering(314) 00:08:48.067 fused_ordering(315) 00:08:48.067 fused_ordering(316) 00:08:48.067 fused_ordering(317) 00:08:48.067 fused_ordering(318) 00:08:48.067 fused_ordering(319) 00:08:48.067 fused_ordering(320) 00:08:48.067 fused_ordering(321) 00:08:48.067 fused_ordering(322) 00:08:48.067 fused_ordering(323) 00:08:48.067 fused_ordering(324) 00:08:48.067 fused_ordering(325) 00:08:48.067 fused_ordering(326) 00:08:48.067 fused_ordering(327) 00:08:48.067 fused_ordering(328) 00:08:48.067 fused_ordering(329) 00:08:48.067 fused_ordering(330) 00:08:48.067 fused_ordering(331) 00:08:48.067 fused_ordering(332) 00:08:48.067 fused_ordering(333) 00:08:48.067 fused_ordering(334) 00:08:48.067 fused_ordering(335) 00:08:48.067 fused_ordering(336) 00:08:48.067 fused_ordering(337) 00:08:48.067 fused_ordering(338) 00:08:48.067 fused_ordering(339) 00:08:48.067 fused_ordering(340) 00:08:48.067 fused_ordering(341) 00:08:48.067 fused_ordering(342) 00:08:48.067 fused_ordering(343) 00:08:48.067 fused_ordering(344) 00:08:48.067 fused_ordering(345) 00:08:48.067 fused_ordering(346) 00:08:48.067 fused_ordering(347) 00:08:48.067 fused_ordering(348) 00:08:48.067 fused_ordering(349) 00:08:48.067 fused_ordering(350) 00:08:48.067 fused_ordering(351) 00:08:48.067 fused_ordering(352) 00:08:48.067 fused_ordering(353) 00:08:48.067 fused_ordering(354) 00:08:48.067 fused_ordering(355) 00:08:48.067 fused_ordering(356) 00:08:48.067 fused_ordering(357) 00:08:48.067 fused_ordering(358) 00:08:48.067 fused_ordering(359) 00:08:48.067 fused_ordering(360) 00:08:48.067 fused_ordering(361) 00:08:48.067 fused_ordering(362) 00:08:48.067 fused_ordering(363) 00:08:48.067 fused_ordering(364) 00:08:48.067 fused_ordering(365) 00:08:48.067 fused_ordering(366) 00:08:48.067 fused_ordering(367) 00:08:48.067 fused_ordering(368) 00:08:48.067 fused_ordering(369) 00:08:48.067 fused_ordering(370) 00:08:48.067 fused_ordering(371) 00:08:48.067 fused_ordering(372) 00:08:48.067 fused_ordering(373) 00:08:48.067 fused_ordering(374) 00:08:48.067 fused_ordering(375) 00:08:48.067 fused_ordering(376) 00:08:48.067 fused_ordering(377) 00:08:48.067 fused_ordering(378) 00:08:48.067 fused_ordering(379) 00:08:48.067 fused_ordering(380) 00:08:48.067 fused_ordering(381) 00:08:48.067 fused_ordering(382) 00:08:48.067 fused_ordering(383) 00:08:48.067 fused_ordering(384) 00:08:48.067 fused_ordering(385) 00:08:48.067 fused_ordering(386) 00:08:48.067 fused_ordering(387) 00:08:48.067 fused_ordering(388) 00:08:48.067 fused_ordering(389) 00:08:48.067 fused_ordering(390) 00:08:48.067 fused_ordering(391) 00:08:48.067 fused_ordering(392) 00:08:48.067 fused_ordering(393) 00:08:48.067 fused_ordering(394) 00:08:48.067 fused_ordering(395) 00:08:48.067 fused_ordering(396) 00:08:48.067 fused_ordering(397) 00:08:48.067 fused_ordering(398) 00:08:48.067 fused_ordering(399) 00:08:48.067 fused_ordering(400) 00:08:48.067 fused_ordering(401) 00:08:48.067 fused_ordering(402) 00:08:48.067 fused_ordering(403) 00:08:48.067 fused_ordering(404) 00:08:48.067 fused_ordering(405) 00:08:48.067 fused_ordering(406) 00:08:48.067 fused_ordering(407) 00:08:48.067 fused_ordering(408) 00:08:48.067 fused_ordering(409) 00:08:48.067 fused_ordering(410) 00:08:48.633 fused_ordering(411) 00:08:48.633 fused_ordering(412) 00:08:48.633 fused_ordering(413) 00:08:48.633 fused_ordering(414) 00:08:48.633 fused_ordering(415) 00:08:48.633 fused_ordering(416) 00:08:48.633 fused_ordering(417) 00:08:48.633 
fused_ordering(418) 00:08:48.633 fused_ordering(419) 00:08:48.633 fused_ordering(420) 00:08:48.633 fused_ordering(421) 00:08:48.633 fused_ordering(422) 00:08:48.633 fused_ordering(423) 00:08:48.633 fused_ordering(424) 00:08:48.633 fused_ordering(425) 00:08:48.633 fused_ordering(426) 00:08:48.633 fused_ordering(427) 00:08:48.633 fused_ordering(428) 00:08:48.633 fused_ordering(429) 00:08:48.633 fused_ordering(430) 00:08:48.633 fused_ordering(431) 00:08:48.633 fused_ordering(432) 00:08:48.633 fused_ordering(433) 00:08:48.633 fused_ordering(434) 00:08:48.633 fused_ordering(435) 00:08:48.633 fused_ordering(436) 00:08:48.633 fused_ordering(437) 00:08:48.633 fused_ordering(438) 00:08:48.633 fused_ordering(439) 00:08:48.633 fused_ordering(440) 00:08:48.633 fused_ordering(441) 00:08:48.633 fused_ordering(442) 00:08:48.633 fused_ordering(443) 00:08:48.633 fused_ordering(444) 00:08:48.633 fused_ordering(445) 00:08:48.633 fused_ordering(446) 00:08:48.633 fused_ordering(447) 00:08:48.633 fused_ordering(448) 00:08:48.633 fused_ordering(449) 00:08:48.633 fused_ordering(450) 00:08:48.633 fused_ordering(451) 00:08:48.633 fused_ordering(452) 00:08:48.633 fused_ordering(453) 00:08:48.633 fused_ordering(454) 00:08:48.633 fused_ordering(455) 00:08:48.633 fused_ordering(456) 00:08:48.633 fused_ordering(457) 00:08:48.633 fused_ordering(458) 00:08:48.633 fused_ordering(459) 00:08:48.633 fused_ordering(460) 00:08:48.633 fused_ordering(461) 00:08:48.633 fused_ordering(462) 00:08:48.633 fused_ordering(463) 00:08:48.633 fused_ordering(464) 00:08:48.633 fused_ordering(465) 00:08:48.633 fused_ordering(466) 00:08:48.633 fused_ordering(467) 00:08:48.633 fused_ordering(468) 00:08:48.633 fused_ordering(469) 00:08:48.633 fused_ordering(470) 00:08:48.633 fused_ordering(471) 00:08:48.633 fused_ordering(472) 00:08:48.633 fused_ordering(473) 00:08:48.633 fused_ordering(474) 00:08:48.633 fused_ordering(475) 00:08:48.633 fused_ordering(476) 00:08:48.633 fused_ordering(477) 00:08:48.633 fused_ordering(478) 00:08:48.633 fused_ordering(479) 00:08:48.633 fused_ordering(480) 00:08:48.633 fused_ordering(481) 00:08:48.633 fused_ordering(482) 00:08:48.633 fused_ordering(483) 00:08:48.633 fused_ordering(484) 00:08:48.633 fused_ordering(485) 00:08:48.633 fused_ordering(486) 00:08:48.633 fused_ordering(487) 00:08:48.633 fused_ordering(488) 00:08:48.633 fused_ordering(489) 00:08:48.633 fused_ordering(490) 00:08:48.633 fused_ordering(491) 00:08:48.633 fused_ordering(492) 00:08:48.633 fused_ordering(493) 00:08:48.633 fused_ordering(494) 00:08:48.633 fused_ordering(495) 00:08:48.633 fused_ordering(496) 00:08:48.633 fused_ordering(497) 00:08:48.633 fused_ordering(498) 00:08:48.633 fused_ordering(499) 00:08:48.633 fused_ordering(500) 00:08:48.633 fused_ordering(501) 00:08:48.633 fused_ordering(502) 00:08:48.633 fused_ordering(503) 00:08:48.633 fused_ordering(504) 00:08:48.633 fused_ordering(505) 00:08:48.633 fused_ordering(506) 00:08:48.633 fused_ordering(507) 00:08:48.633 fused_ordering(508) 00:08:48.633 fused_ordering(509) 00:08:48.633 fused_ordering(510) 00:08:48.633 fused_ordering(511) 00:08:48.633 fused_ordering(512) 00:08:48.633 fused_ordering(513) 00:08:48.633 fused_ordering(514) 00:08:48.633 fused_ordering(515) 00:08:48.633 fused_ordering(516) 00:08:48.633 fused_ordering(517) 00:08:48.633 fused_ordering(518) 00:08:48.633 fused_ordering(519) 00:08:48.633 fused_ordering(520) 00:08:48.633 fused_ordering(521) 00:08:48.633 fused_ordering(522) 00:08:48.633 fused_ordering(523) 00:08:48.633 fused_ordering(524) 00:08:48.633 fused_ordering(525) 
00:08:48.633 fused_ordering(526) 00:08:48.633 fused_ordering(527) 00:08:48.633 fused_ordering(528) 00:08:48.633 fused_ordering(529) 00:08:48.633 fused_ordering(530) 00:08:48.633 fused_ordering(531) 00:08:48.633 fused_ordering(532) 00:08:48.633 fused_ordering(533) 00:08:48.633 fused_ordering(534) 00:08:48.633 fused_ordering(535) 00:08:48.633 fused_ordering(536) 00:08:48.633 fused_ordering(537) 00:08:48.633 fused_ordering(538) 00:08:48.633 fused_ordering(539) 00:08:48.633 fused_ordering(540) 00:08:48.633 fused_ordering(541) 00:08:48.633 fused_ordering(542) 00:08:48.633 fused_ordering(543) 00:08:48.633 fused_ordering(544) 00:08:48.633 fused_ordering(545) 00:08:48.633 fused_ordering(546) 00:08:48.633 fused_ordering(547) 00:08:48.633 fused_ordering(548) 00:08:48.633 fused_ordering(549) 00:08:48.633 fused_ordering(550) 00:08:48.633 fused_ordering(551) 00:08:48.633 fused_ordering(552) 00:08:48.633 fused_ordering(553) 00:08:48.633 fused_ordering(554) 00:08:48.633 fused_ordering(555) 00:08:48.633 fused_ordering(556) 00:08:48.633 fused_ordering(557) 00:08:48.633 fused_ordering(558) 00:08:48.633 fused_ordering(559) 00:08:48.633 fused_ordering(560) 00:08:48.633 fused_ordering(561) 00:08:48.633 fused_ordering(562) 00:08:48.633 fused_ordering(563) 00:08:48.633 fused_ordering(564) 00:08:48.633 fused_ordering(565) 00:08:48.633 fused_ordering(566) 00:08:48.633 fused_ordering(567) 00:08:48.633 fused_ordering(568) 00:08:48.633 fused_ordering(569) 00:08:48.633 fused_ordering(570) 00:08:48.633 fused_ordering(571) 00:08:48.633 fused_ordering(572) 00:08:48.633 fused_ordering(573) 00:08:48.633 fused_ordering(574) 00:08:48.633 fused_ordering(575) 00:08:48.633 fused_ordering(576) 00:08:48.633 fused_ordering(577) 00:08:48.633 fused_ordering(578) 00:08:48.633 fused_ordering(579) 00:08:48.633 fused_ordering(580) 00:08:48.633 fused_ordering(581) 00:08:48.633 fused_ordering(582) 00:08:48.633 fused_ordering(583) 00:08:48.633 fused_ordering(584) 00:08:48.633 fused_ordering(585) 00:08:48.633 fused_ordering(586) 00:08:48.633 fused_ordering(587) 00:08:48.633 fused_ordering(588) 00:08:48.633 fused_ordering(589) 00:08:48.633 fused_ordering(590) 00:08:48.633 fused_ordering(591) 00:08:48.633 fused_ordering(592) 00:08:48.633 fused_ordering(593) 00:08:48.633 fused_ordering(594) 00:08:48.633 fused_ordering(595) 00:08:48.633 fused_ordering(596) 00:08:48.633 fused_ordering(597) 00:08:48.633 fused_ordering(598) 00:08:48.633 fused_ordering(599) 00:08:48.633 fused_ordering(600) 00:08:48.633 fused_ordering(601) 00:08:48.633 fused_ordering(602) 00:08:48.633 fused_ordering(603) 00:08:48.633 fused_ordering(604) 00:08:48.633 fused_ordering(605) 00:08:48.633 fused_ordering(606) 00:08:48.633 fused_ordering(607) 00:08:48.633 fused_ordering(608) 00:08:48.633 fused_ordering(609) 00:08:48.633 fused_ordering(610) 00:08:48.633 fused_ordering(611) 00:08:48.633 fused_ordering(612) 00:08:48.633 fused_ordering(613) 00:08:48.633 fused_ordering(614) 00:08:48.633 fused_ordering(615) 00:08:49.199 fused_ordering(616) 00:08:49.199 fused_ordering(617) 00:08:49.199 fused_ordering(618) 00:08:49.199 fused_ordering(619) 00:08:49.199 fused_ordering(620) 00:08:49.199 fused_ordering(621) 00:08:49.199 fused_ordering(622) 00:08:49.199 fused_ordering(623) 00:08:49.199 fused_ordering(624) 00:08:49.199 fused_ordering(625) 00:08:49.199 fused_ordering(626) 00:08:49.199 fused_ordering(627) 00:08:49.199 fused_ordering(628) 00:08:49.199 fused_ordering(629) 00:08:49.199 fused_ordering(630) 00:08:49.199 fused_ordering(631) 00:08:49.199 fused_ordering(632) 00:08:49.199 
fused_ordering(633) 00:08:49.199 fused_ordering(634) 00:08:49.199 fused_ordering(635) 00:08:49.199 fused_ordering(636) 00:08:49.199 fused_ordering(637) 00:08:49.199 fused_ordering(638) 00:08:49.199 fused_ordering(639) 00:08:49.199 fused_ordering(640) 00:08:49.199 fused_ordering(641) 00:08:49.199 fused_ordering(642) 00:08:49.199 fused_ordering(643) 00:08:49.199 fused_ordering(644) 00:08:49.199 fused_ordering(645) 00:08:49.199 fused_ordering(646) 00:08:49.199 fused_ordering(647) 00:08:49.199 fused_ordering(648) 00:08:49.199 fused_ordering(649) 00:08:49.199 fused_ordering(650) 00:08:49.199 fused_ordering(651) 00:08:49.199 fused_ordering(652) 00:08:49.199 fused_ordering(653) 00:08:49.199 fused_ordering(654) 00:08:49.199 fused_ordering(655) 00:08:49.199 fused_ordering(656) 00:08:49.199 fused_ordering(657) 00:08:49.199 fused_ordering(658) 00:08:49.199 fused_ordering(659) 00:08:49.199 fused_ordering(660) 00:08:49.199 fused_ordering(661) 00:08:49.199 fused_ordering(662) 00:08:49.199 fused_ordering(663) 00:08:49.199 fused_ordering(664) 00:08:49.199 fused_ordering(665) 00:08:49.199 fused_ordering(666) 00:08:49.199 fused_ordering(667) 00:08:49.199 fused_ordering(668) 00:08:49.199 fused_ordering(669) 00:08:49.199 fused_ordering(670) 00:08:49.199 fused_ordering(671) 00:08:49.199 fused_ordering(672) 00:08:49.199 fused_ordering(673) 00:08:49.199 fused_ordering(674) 00:08:49.199 fused_ordering(675) 00:08:49.199 fused_ordering(676) 00:08:49.199 fused_ordering(677) 00:08:49.199 fused_ordering(678) 00:08:49.199 fused_ordering(679) 00:08:49.199 fused_ordering(680) 00:08:49.199 fused_ordering(681) 00:08:49.199 fused_ordering(682) 00:08:49.199 fused_ordering(683) 00:08:49.199 fused_ordering(684) 00:08:49.199 fused_ordering(685) 00:08:49.199 fused_ordering(686) 00:08:49.199 fused_ordering(687) 00:08:49.199 fused_ordering(688) 00:08:49.199 fused_ordering(689) 00:08:49.199 fused_ordering(690) 00:08:49.199 fused_ordering(691) 00:08:49.199 fused_ordering(692) 00:08:49.199 fused_ordering(693) 00:08:49.199 fused_ordering(694) 00:08:49.199 fused_ordering(695) 00:08:49.199 fused_ordering(696) 00:08:49.199 fused_ordering(697) 00:08:49.199 fused_ordering(698) 00:08:49.199 fused_ordering(699) 00:08:49.199 fused_ordering(700) 00:08:49.199 fused_ordering(701) 00:08:49.199 fused_ordering(702) 00:08:49.199 fused_ordering(703) 00:08:49.199 fused_ordering(704) 00:08:49.199 fused_ordering(705) 00:08:49.199 fused_ordering(706) 00:08:49.199 fused_ordering(707) 00:08:49.199 fused_ordering(708) 00:08:49.199 fused_ordering(709) 00:08:49.199 fused_ordering(710) 00:08:49.199 fused_ordering(711) 00:08:49.199 fused_ordering(712) 00:08:49.199 fused_ordering(713) 00:08:49.199 fused_ordering(714) 00:08:49.199 fused_ordering(715) 00:08:49.199 fused_ordering(716) 00:08:49.199 fused_ordering(717) 00:08:49.199 fused_ordering(718) 00:08:49.199 fused_ordering(719) 00:08:49.199 fused_ordering(720) 00:08:49.199 fused_ordering(721) 00:08:49.199 fused_ordering(722) 00:08:49.199 fused_ordering(723) 00:08:49.199 fused_ordering(724) 00:08:49.199 fused_ordering(725) 00:08:49.199 fused_ordering(726) 00:08:49.199 fused_ordering(727) 00:08:49.199 fused_ordering(728) 00:08:49.199 fused_ordering(729) 00:08:49.199 fused_ordering(730) 00:08:49.199 fused_ordering(731) 00:08:49.199 fused_ordering(732) 00:08:49.199 fused_ordering(733) 00:08:49.199 fused_ordering(734) 00:08:49.199 fused_ordering(735) 00:08:49.199 fused_ordering(736) 00:08:49.199 fused_ordering(737) 00:08:49.199 fused_ordering(738) 00:08:49.199 fused_ordering(739) 00:08:49.199 fused_ordering(740) 
00:08:49.199 fused_ordering(741) 00:08:49.199 fused_ordering(742) 00:08:49.199 fused_ordering(743) 00:08:49.199 fused_ordering(744) 00:08:49.199 fused_ordering(745) 00:08:49.199 fused_ordering(746) 00:08:49.199 fused_ordering(747) 00:08:49.199 fused_ordering(748) 00:08:49.199 fused_ordering(749) 00:08:49.199 fused_ordering(750) 00:08:49.199 fused_ordering(751) 00:08:49.199 fused_ordering(752) 00:08:49.199 fused_ordering(753) 00:08:49.199 fused_ordering(754) 00:08:49.199 fused_ordering(755) 00:08:49.199 fused_ordering(756) 00:08:49.199 fused_ordering(757) 00:08:49.199 fused_ordering(758) 00:08:49.199 fused_ordering(759) 00:08:49.199 fused_ordering(760) 00:08:49.199 fused_ordering(761) 00:08:49.199 fused_ordering(762) 00:08:49.199 fused_ordering(763) 00:08:49.199 fused_ordering(764) 00:08:49.199 fused_ordering(765) 00:08:49.199 fused_ordering(766) 00:08:49.199 fused_ordering(767) 00:08:49.199 fused_ordering(768) 00:08:49.199 fused_ordering(769) 00:08:49.199 fused_ordering(770) 00:08:49.199 fused_ordering(771) 00:08:49.199 fused_ordering(772) 00:08:49.199 fused_ordering(773) 00:08:49.199 fused_ordering(774) 00:08:49.199 fused_ordering(775) 00:08:49.199 fused_ordering(776) 00:08:49.199 fused_ordering(777) 00:08:49.199 fused_ordering(778) 00:08:49.199 fused_ordering(779) 00:08:49.199 fused_ordering(780) 00:08:49.199 fused_ordering(781) 00:08:49.199 fused_ordering(782) 00:08:49.199 fused_ordering(783) 00:08:49.199 fused_ordering(784) 00:08:49.199 fused_ordering(785) 00:08:49.199 fused_ordering(786) 00:08:49.199 fused_ordering(787) 00:08:49.199 fused_ordering(788) 00:08:49.199 fused_ordering(789) 00:08:49.199 fused_ordering(790) 00:08:49.199 fused_ordering(791) 00:08:49.199 fused_ordering(792) 00:08:49.199 fused_ordering(793) 00:08:49.200 fused_ordering(794) 00:08:49.200 fused_ordering(795) 00:08:49.200 fused_ordering(796) 00:08:49.200 fused_ordering(797) 00:08:49.200 fused_ordering(798) 00:08:49.200 fused_ordering(799) 00:08:49.200 fused_ordering(800) 00:08:49.200 fused_ordering(801) 00:08:49.200 fused_ordering(802) 00:08:49.200 fused_ordering(803) 00:08:49.200 fused_ordering(804) 00:08:49.200 fused_ordering(805) 00:08:49.200 fused_ordering(806) 00:08:49.200 fused_ordering(807) 00:08:49.200 fused_ordering(808) 00:08:49.200 fused_ordering(809) 00:08:49.200 fused_ordering(810) 00:08:49.200 fused_ordering(811) 00:08:49.200 fused_ordering(812) 00:08:49.200 fused_ordering(813) 00:08:49.200 fused_ordering(814) 00:08:49.200 fused_ordering(815) 00:08:49.200 fused_ordering(816) 00:08:49.200 fused_ordering(817) 00:08:49.200 fused_ordering(818) 00:08:49.200 fused_ordering(819) 00:08:49.200 fused_ordering(820) 00:08:50.133 fused_ordering(821) 00:08:50.133 fused_ordering(822) 00:08:50.133 fused_ordering(823) 00:08:50.133 fused_ordering(824) 00:08:50.133 fused_ordering(825) 00:08:50.133 fused_ordering(826) 00:08:50.133 fused_ordering(827) 00:08:50.133 fused_ordering(828) 00:08:50.133 fused_ordering(829) 00:08:50.133 fused_ordering(830) 00:08:50.133 fused_ordering(831) 00:08:50.133 fused_ordering(832) 00:08:50.133 fused_ordering(833) 00:08:50.133 fused_ordering(834) 00:08:50.133 fused_ordering(835) 00:08:50.133 fused_ordering(836) 00:08:50.133 fused_ordering(837) 00:08:50.133 fused_ordering(838) 00:08:50.133 fused_ordering(839) 00:08:50.133 fused_ordering(840) 00:08:50.133 fused_ordering(841) 00:08:50.133 fused_ordering(842) 00:08:50.133 fused_ordering(843) 00:08:50.133 fused_ordering(844) 00:08:50.133 fused_ordering(845) 00:08:50.133 fused_ordering(846) 00:08:50.133 fused_ordering(847) 00:08:50.133 
fused_ordering(848) 00:08:50.133 fused_ordering(849) 00:08:50.133 fused_ordering(850) 00:08:50.133 fused_ordering(851) 00:08:50.133 fused_ordering(852) 00:08:50.133 fused_ordering(853) 00:08:50.133 fused_ordering(854) 00:08:50.133 fused_ordering(855) 00:08:50.133 fused_ordering(856) 00:08:50.133 fused_ordering(857) 00:08:50.133 fused_ordering(858) 00:08:50.133 fused_ordering(859) 00:08:50.133 fused_ordering(860) 00:08:50.133 fused_ordering(861) 00:08:50.134 fused_ordering(862) 00:08:50.134 fused_ordering(863) 00:08:50.134 fused_ordering(864) 00:08:50.134 fused_ordering(865) 00:08:50.134 fused_ordering(866) 00:08:50.134 fused_ordering(867) 00:08:50.134 fused_ordering(868) 00:08:50.134 fused_ordering(869) 00:08:50.134 fused_ordering(870) 00:08:50.134 fused_ordering(871) 00:08:50.134 fused_ordering(872) 00:08:50.134 fused_ordering(873) 00:08:50.134 fused_ordering(874) 00:08:50.134 fused_ordering(875) 00:08:50.134 fused_ordering(876) 00:08:50.134 fused_ordering(877) 00:08:50.134 fused_ordering(878) 00:08:50.134 fused_ordering(879) 00:08:50.134 fused_ordering(880) 00:08:50.134 fused_ordering(881) 00:08:50.134 fused_ordering(882) 00:08:50.134 fused_ordering(883) 00:08:50.134 fused_ordering(884) 00:08:50.134 fused_ordering(885) 00:08:50.134 fused_ordering(886) 00:08:50.134 fused_ordering(887) 00:08:50.134 fused_ordering(888) 00:08:50.134 fused_ordering(889) 00:08:50.134 fused_ordering(890) 00:08:50.134 fused_ordering(891) 00:08:50.134 fused_ordering(892) 00:08:50.134 fused_ordering(893) 00:08:50.134 fused_ordering(894) 00:08:50.134 fused_ordering(895) 00:08:50.134 fused_ordering(896) 00:08:50.134 fused_ordering(897) 00:08:50.134 fused_ordering(898) 00:08:50.134 fused_ordering(899) 00:08:50.134 fused_ordering(900) 00:08:50.134 fused_ordering(901) 00:08:50.134 fused_ordering(902) 00:08:50.134 fused_ordering(903) 00:08:50.134 fused_ordering(904) 00:08:50.134 fused_ordering(905) 00:08:50.134 fused_ordering(906) 00:08:50.134 fused_ordering(907) 00:08:50.134 fused_ordering(908) 00:08:50.134 fused_ordering(909) 00:08:50.134 fused_ordering(910) 00:08:50.134 fused_ordering(911) 00:08:50.134 fused_ordering(912) 00:08:50.134 fused_ordering(913) 00:08:50.134 fused_ordering(914) 00:08:50.134 fused_ordering(915) 00:08:50.134 fused_ordering(916) 00:08:50.134 fused_ordering(917) 00:08:50.134 fused_ordering(918) 00:08:50.134 fused_ordering(919) 00:08:50.134 fused_ordering(920) 00:08:50.134 fused_ordering(921) 00:08:50.134 fused_ordering(922) 00:08:50.134 fused_ordering(923) 00:08:50.134 fused_ordering(924) 00:08:50.134 fused_ordering(925) 00:08:50.134 fused_ordering(926) 00:08:50.134 fused_ordering(927) 00:08:50.134 fused_ordering(928) 00:08:50.134 fused_ordering(929) 00:08:50.134 fused_ordering(930) 00:08:50.134 fused_ordering(931) 00:08:50.134 fused_ordering(932) 00:08:50.134 fused_ordering(933) 00:08:50.134 fused_ordering(934) 00:08:50.134 fused_ordering(935) 00:08:50.134 fused_ordering(936) 00:08:50.134 fused_ordering(937) 00:08:50.134 fused_ordering(938) 00:08:50.134 fused_ordering(939) 00:08:50.134 fused_ordering(940) 00:08:50.134 fused_ordering(941) 00:08:50.134 fused_ordering(942) 00:08:50.134 fused_ordering(943) 00:08:50.134 fused_ordering(944) 00:08:50.134 fused_ordering(945) 00:08:50.134 fused_ordering(946) 00:08:50.134 fused_ordering(947) 00:08:50.134 fused_ordering(948) 00:08:50.134 fused_ordering(949) 00:08:50.134 fused_ordering(950) 00:08:50.134 fused_ordering(951) 00:08:50.134 fused_ordering(952) 00:08:50.134 fused_ordering(953) 00:08:50.134 fused_ordering(954) 00:08:50.134 fused_ordering(955) 
00:08:50.134 fused_ordering(956) 00:08:50.134 fused_ordering(957) 00:08:50.134 fused_ordering(958) 00:08:50.134 fused_ordering(959) 00:08:50.134 fused_ordering(960) 00:08:50.134 fused_ordering(961) 00:08:50.134 fused_ordering(962) 00:08:50.134 fused_ordering(963) 00:08:50.134 fused_ordering(964) 00:08:50.134 fused_ordering(965) 00:08:50.134 fused_ordering(966) 00:08:50.134 fused_ordering(967) 00:08:50.134 fused_ordering(968) 00:08:50.134 fused_ordering(969) 00:08:50.134 fused_ordering(970) 00:08:50.134 fused_ordering(971) 00:08:50.134 fused_ordering(972) 00:08:50.134 fused_ordering(973) 00:08:50.134 fused_ordering(974) 00:08:50.134 fused_ordering(975) 00:08:50.134 fused_ordering(976) 00:08:50.134 fused_ordering(977) 00:08:50.134 fused_ordering(978) 00:08:50.134 fused_ordering(979) 00:08:50.134 fused_ordering(980) 00:08:50.134 fused_ordering(981) 00:08:50.134 fused_ordering(982) 00:08:50.134 fused_ordering(983) 00:08:50.134 fused_ordering(984) 00:08:50.134 fused_ordering(985) 00:08:50.134 fused_ordering(986) 00:08:50.134 fused_ordering(987) 00:08:50.134 fused_ordering(988) 00:08:50.134 fused_ordering(989) 00:08:50.134 fused_ordering(990) 00:08:50.134 fused_ordering(991) 00:08:50.134 fused_ordering(992) 00:08:50.134 fused_ordering(993) 00:08:50.134 fused_ordering(994) 00:08:50.134 fused_ordering(995) 00:08:50.134 fused_ordering(996) 00:08:50.134 fused_ordering(997) 00:08:50.134 fused_ordering(998) 00:08:50.134 fused_ordering(999) 00:08:50.134 fused_ordering(1000) 00:08:50.134 fused_ordering(1001) 00:08:50.134 fused_ordering(1002) 00:08:50.134 fused_ordering(1003) 00:08:50.134 fused_ordering(1004) 00:08:50.134 fused_ordering(1005) 00:08:50.134 fused_ordering(1006) 00:08:50.134 fused_ordering(1007) 00:08:50.134 fused_ordering(1008) 00:08:50.134 fused_ordering(1009) 00:08:50.134 fused_ordering(1010) 00:08:50.134 fused_ordering(1011) 00:08:50.134 fused_ordering(1012) 00:08:50.134 fused_ordering(1013) 00:08:50.134 fused_ordering(1014) 00:08:50.134 fused_ordering(1015) 00:08:50.134 fused_ordering(1016) 00:08:50.134 fused_ordering(1017) 00:08:50.134 fused_ordering(1018) 00:08:50.134 fused_ordering(1019) 00:08:50.134 fused_ordering(1020) 00:08:50.134 fused_ordering(1021) 00:08:50.134 fused_ordering(1022) 00:08:50.134 fused_ordering(1023) 00:08:50.134 14:13:31 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:08:50.134 14:13:31 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:08:50.134 14:13:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:50.134 14:13:31 -- nvmf/common.sh@117 -- # sync 00:08:50.134 14:13:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:50.134 14:13:31 -- nvmf/common.sh@120 -- # set +e 00:08:50.134 14:13:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:50.134 14:13:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:50.134 rmmod nvme_tcp 00:08:50.134 rmmod nvme_fabrics 00:08:50.134 rmmod nvme_keyring 00:08:50.134 14:13:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:50.134 14:13:31 -- nvmf/common.sh@124 -- # set -e 00:08:50.134 14:13:31 -- nvmf/common.sh@125 -- # return 0 00:08:50.134 14:13:31 -- nvmf/common.sh@478 -- # '[' -n 3103417 ']' 00:08:50.134 14:13:31 -- nvmf/common.sh@479 -- # killprocess 3103417 00:08:50.134 14:13:31 -- common/autotest_common.sh@936 -- # '[' -z 3103417 ']' 00:08:50.134 14:13:31 -- common/autotest_common.sh@940 -- # kill -0 3103417 00:08:50.134 14:13:31 -- common/autotest_common.sh@941 -- # uname 00:08:50.134 14:13:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:50.134 14:13:31 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3103417 00:08:50.134 14:13:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:50.134 14:13:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:50.134 14:13:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3103417' 00:08:50.134 killing process with pid 3103417 00:08:50.134 14:13:31 -- common/autotest_common.sh@955 -- # kill 3103417 00:08:50.134 14:13:31 -- common/autotest_common.sh@960 -- # wait 3103417 00:08:50.393 14:13:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:50.393 14:13:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:50.393 14:13:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:50.393 14:13:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:50.393 14:13:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:50.393 14:13:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.393 14:13:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.393 14:13:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.299 14:13:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:52.299 00:08:52.299 real 0m7.362s 00:08:52.299 user 0m5.026s 00:08:52.299 sys 0m3.181s 00:08:52.299 14:13:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:52.299 14:13:33 -- common/autotest_common.sh@10 -- # set +x 00:08:52.299 ************************************ 00:08:52.299 END TEST nvmf_fused_ordering 00:08:52.299 ************************************ 00:08:52.299 14:13:33 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:52.299 14:13:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:52.299 14:13:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:52.299 14:13:33 -- common/autotest_common.sh@10 -- # set +x 00:08:52.559 ************************************ 00:08:52.559 START TEST nvmf_delete_subsystem 00:08:52.559 ************************************ 00:08:52.559 14:13:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:52.559 * Looking for test storage... 
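Before the next test starts, nvmftestfini tears the fixture down. The trace above reduces to roughly this sequence (a condensed sketch: the real helpers wrap each step in retries and traps, and treating _remove_spdk_ns as a plain "ip netns delete" is an assumption about its effect):

  sync
  modprobe -v -r nvme-tcp              # also pulls out nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: stop the target's reactor
  ip netns delete cvl_0_0_ns_spdk      # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1             # drop the initiator-side address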
00:08:52.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:52.559 14:13:33 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.559 14:13:33 -- nvmf/common.sh@7 -- # uname -s 00:08:52.559 14:13:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.559 14:13:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.559 14:13:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.559 14:13:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.559 14:13:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.559 14:13:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.559 14:13:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.559 14:13:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.559 14:13:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.559 14:13:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.559 14:13:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:52.559 14:13:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:52.559 14:13:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.559 14:13:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.559 14:13:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.559 14:13:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.559 14:13:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.559 14:13:34 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.559 14:13:34 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.559 14:13:34 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.559 14:13:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.559 14:13:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.559 14:13:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.559 14:13:34 -- paths/export.sh@5 -- # export PATH 00:08:52.559 14:13:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.559 14:13:34 -- nvmf/common.sh@47 -- # : 0 00:08:52.559 14:13:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:52.559 14:13:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:52.559 14:13:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.559 14:13:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.559 14:13:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.559 14:13:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:52.559 14:13:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:52.559 14:13:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:52.559 14:13:34 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:52.559 14:13:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:52.559 14:13:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.559 14:13:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:52.559 14:13:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:52.559 14:13:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:52.559 14:13:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.559 14:13:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:52.559 14:13:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.559 14:13:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:52.559 14:13:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:52.559 14:13:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:52.559 14:13:34 -- common/autotest_common.sh@10 -- # set +x 00:08:54.464 14:13:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:54.464 14:13:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:54.464 14:13:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:54.464 14:13:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:54.464 14:13:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:54.464 14:13:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:54.464 14:13:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:54.464 14:13:35 -- nvmf/common.sh@295 -- # net_devs=() 00:08:54.464 14:13:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:54.464 14:13:35 -- nvmf/common.sh@296 -- # e810=() 00:08:54.464 14:13:35 -- nvmf/common.sh@296 -- # local -ga e810 00:08:54.464 14:13:35 -- nvmf/common.sh@297 -- # x722=() 
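gather_supported_nvmf_pci_devs, entered at the end of the trace above, walks the PCI bus and keeps only NICs it knows how to drive; with SPDK_TEST_NVMF_NICS=e810 that means Intel devices 0x1592/0x159b, and each match is resolved to a kernel net device through sysfs. The same lookup can be sketched with stock tools (lspci -D prints full PCI addresses, -n numeric IDs, -d filters by vendor:device):

  # Find E810 ports (vendor 0x8086, device 0x159b) and map each PCI address
  # to its net device the same way common.sh does via /sys.
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      ls "/sys/bus/pci/devices/$pci/net/"    # prints cvl_0_0 / cvl_0_1 here
  done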
00:08:54.464 14:13:35 -- nvmf/common.sh@297 -- # local -ga x722 00:08:54.464 14:13:35 -- nvmf/common.sh@298 -- # mlx=() 00:08:54.464 14:13:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:54.464 14:13:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.464 14:13:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.464 14:13:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.464 14:13:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.464 14:13:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.464 14:13:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.464 14:13:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.464 14:13:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.464 14:13:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.464 14:13:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.464 14:13:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.464 14:13:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:54.464 14:13:35 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:54.464 14:13:35 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:54.464 14:13:35 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:54.464 14:13:35 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:54.464 14:13:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:54.464 14:13:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.464 14:13:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:08:54.464 Found 0000:08:00.0 (0x8086 - 0x159b) 00:08:54.464 14:13:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.464 14:13:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.464 14:13:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.464 14:13:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.464 14:13:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.464 14:13:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.464 14:13:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:08:54.464 Found 0000:08:00.1 (0x8086 - 0x159b) 00:08:54.464 14:13:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.464 14:13:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.464 14:13:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.464 14:13:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.464 14:13:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.464 14:13:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:54.464 14:13:35 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:54.464 14:13:35 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:54.464 14:13:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.464 14:13:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.464 14:13:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:54.464 14:13:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.464 14:13:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:08:54.464 Found net devices under 0000:08:00.0: cvl_0_0 00:08:54.464 14:13:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:08:54.464 14:13:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.464 14:13:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.464 14:13:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:54.464 14:13:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.464 14:13:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:08:54.464 Found net devices under 0000:08:00.1: cvl_0_1 00:08:54.464 14:13:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.464 14:13:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:54.464 14:13:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:54.464 14:13:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:54.464 14:13:35 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:54.464 14:13:35 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:54.464 14:13:35 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.464 14:13:35 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.464 14:13:35 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:54.464 14:13:35 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:54.464 14:13:35 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:54.464 14:13:35 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:54.464 14:13:35 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:54.464 14:13:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:54.464 14:13:35 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.464 14:13:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:54.464 14:13:35 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:54.464 14:13:35 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:54.464 14:13:35 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:54.464 14:13:35 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:54.464 14:13:35 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:54.464 14:13:35 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:54.464 14:13:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:54.464 14:13:35 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:54.464 14:13:35 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:54.464 14:13:35 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:54.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:54.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:08:54.464 00:08:54.464 --- 10.0.0.2 ping statistics --- 00:08:54.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.464 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:08:54.464 14:13:35 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:54.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:54.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:08:54.464 00:08:54.464 --- 10.0.0.1 ping statistics --- 00:08:54.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.464 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:08:54.464 14:13:35 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.464 14:13:35 -- nvmf/common.sh@411 -- # return 0 00:08:54.464 14:13:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:54.464 14:13:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.464 14:13:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:54.464 14:13:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:54.464 14:13:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.464 14:13:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:54.464 14:13:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:54.464 14:13:35 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:54.464 14:13:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:54.464 14:13:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:54.464 14:13:35 -- common/autotest_common.sh@10 -- # set +x 00:08:54.464 14:13:35 -- nvmf/common.sh@470 -- # nvmfpid=3105161 00:08:54.464 14:13:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:54.464 14:13:35 -- nvmf/common.sh@471 -- # waitforlisten 3105161 00:08:54.464 14:13:35 -- common/autotest_common.sh@817 -- # '[' -z 3105161 ']' 00:08:54.464 14:13:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.464 14:13:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:54.464 14:13:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.464 14:13:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:54.464 14:13:35 -- common/autotest_common.sh@10 -- # set +x 00:08:54.465 [2024-04-26 14:13:35.851936] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:08:54.465 [2024-04-26 14:13:35.852037] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.465 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.465 [2024-04-26 14:13:35.918228] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:54.723 [2024-04-26 14:13:36.036806] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.723 [2024-04-26 14:13:36.036866] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.723 [2024-04-26 14:13:36.036883] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.723 [2024-04-26 14:13:36.036896] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.723 [2024-04-26 14:13:36.036908] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
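Note the core mask: this target is launched with -m 0x3, i.e. binary 11, so reactors come up on cores 0 and 1 below, whereas the fused_ordering run earlier used -m 0x2 (binary 10) and started a single reactor on core 1:

  # core mask -> reactor placement
  -m 0x2   0b10   core 1           (fused_ordering target above)
  -m 0x3   0b11   cores 0 and 1    (this delete_subsystem target)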
00:08:54.723 [2024-04-26 14:13:36.037003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.723 [2024-04-26 14:13:36.037029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.723 14:13:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:54.723 14:13:36 -- common/autotest_common.sh@850 -- # return 0 00:08:54.723 14:13:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:54.723 14:13:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:54.723 14:13:36 -- common/autotest_common.sh@10 -- # set +x 00:08:54.723 14:13:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.723 14:13:36 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:54.723 14:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:54.723 14:13:36 -- common/autotest_common.sh@10 -- # set +x 00:08:54.723 [2024-04-26 14:13:36.176911] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.723 14:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:54.723 14:13:36 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:54.723 14:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:54.723 14:13:36 -- common/autotest_common.sh@10 -- # set +x 00:08:54.723 14:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:54.723 14:13:36 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.723 14:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:54.723 14:13:36 -- common/autotest_common.sh@10 -- # set +x 00:08:54.723 [2024-04-26 14:13:36.193081] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.723 14:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:54.723 14:13:36 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:54.723 14:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:54.723 14:13:36 -- common/autotest_common.sh@10 -- # set +x 00:08:54.723 NULL1 00:08:54.723 14:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:54.723 14:13:36 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:54.723 14:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:54.723 14:13:36 -- common/autotest_common.sh@10 -- # set +x 00:08:54.723 Delay0 00:08:54.723 14:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:54.723 14:13:36 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.723 14:13:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:54.723 14:13:36 -- common/autotest_common.sh@10 -- # set +x 00:08:54.723 14:13:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:54.723 14:13:36 -- target/delete_subsystem.sh@28 -- # perf_pid=3105269 00:08:54.723 14:13:36 -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:54.723 14:13:36 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:54.723 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.723 [2024-04-26 14:13:36.277877] 
subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:08:57.250 14:13:38 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
00:08:57.250 14:13:38 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:08:57.250 14:13:38 -- common/autotest_common.sh@10 -- # set +x 
00:08:57.250 Read completed with error (sct=0, sc=8) 
00:08:57.250 Write completed with error (sct=0, sc=8) 
00:08:57.250 starting I/O failed: -6 
00:08:57.250 [... dozens of further 'Read/Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' markers elided ...] 
00:08:57.251 [2024-04-26 14:13:38.493113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19410e0 is same with the state(5) to be set 
00:08:57.251 [... further aborted-I/O completions and 'starting I/O failed: -6' markers elided ...] 
00:08:57.251 [2024-04-26 14:13:38.493711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9cbc00bf90 is same with the state(5) to be set 
00:08:57.251 [... a long run of 'Read/Write completed with error (sct=0, sc=8)' completions elided ...] 
00:08:58.185 [2024-04-26 14:13:39.455987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1937d10 is same with the state(5) to be set 
00:08:58.185 [... aborted-I/O completions elided ...] 
00:08:58.185 [2024-04-26 14:13:39.495044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941c30 is same with the state(5) to be set 
00:08:58.185 [... aborted-I/O completions elided ...] 
00:08:58.185 [2024-04-26 14:13:39.495752] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x193a0a0 is same with the state(5) to be set 
00:08:58.185 [... aborted-I/O completions elided ...] 
00:08:58.186 [2024-04-26 14:13:39.496028] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1941270 is same with the state(5) to be set 
00:08:58.186 [... aborted-I/O completions elided ...] 
00:08:58.186 [2024-04-26 14:13:39.496384] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9cbc00c250 is same with the state(5) to be set 
00:08:58.186 [2024-04-26 14:13:39.497094] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1937d10 (9): Bad file descriptor 
00:08:58.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 
00:08:58.186 14:13:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:08:58.186 14:13:39 -- target/delete_subsystem.sh@34 -- # delay=0 
00:08:58.186 14:13:39 -- target/delete_subsystem.sh@35 -- # kill -0 3105269 
00:08:58.186 14:13:39 -- target/delete_subsystem.sh@36 -- # sleep 0.5 
00:08:58.186 Initializing NVMe Controllers 
00:08:58.186 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:08:58.186 Controller IO queue size 128, less than required. 
00:08:58.186 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:58.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 
00:08:58.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 
00:08:58.186 Initialization complete. Launching workers. 
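This is the core of the delete_subsystem test. The namespace behind nqn.2016-06.io.spdk:cnode1 is a delay bdev that adds roughly one second to every operation, so spdk_nvme_perf (queue depth 128, 5-second run) is guaranteed to have a full queue in flight when, two seconds in, the subsystem is deleted underneath it. Every in-flight command completes with (sct=0, sc=8) — generic status 0x08, a command aborted due to SQ deletion — new submissions fail with -6 (-ENXIO), and perf exits with "errors occurred", which is exactly the behavior under test; the latency table that follows shows averages just under the configured 1,000,000 us. A sketch of the RPC sequence, assembled from the rpc_cmd calls logged above (rpc_cmd is assumed to resolve to scripts/rpc.py, as the rpc_py variable later in this log does):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512       # 1000 MB null bdev, 512-byte blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000    # ~1 s avg/p99 read+write latency (us)
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # launch spdk_nvme_perf against 10.0.0.2:4420, sleep 2, then pull the rug:
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1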
00:08:58.186 ======================================================== 00:08:58.186 Latency(us) 00:08:58.186 Device Information : IOPS MiB/s Average min max 00:08:58.186 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 195.18 0.10 945129.20 728.46 1016160.07 00:08:58.186 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 146.14 0.07 916987.10 361.96 1016146.08 00:08:58.186 ======================================================== 00:08:58.186 Total : 341.32 0.17 933079.97 361.96 1016160.07 00:08:58.186 00:08:58.444 14:13:40 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:58.444 14:13:40 -- target/delete_subsystem.sh@35 -- # kill -0 3105269 00:08:58.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3105269) - No such process 00:08:58.444 14:13:40 -- target/delete_subsystem.sh@45 -- # NOT wait 3105269 00:08:58.444 14:13:40 -- common/autotest_common.sh@638 -- # local es=0 00:08:58.444 14:13:40 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 3105269 00:08:58.444 14:13:40 -- common/autotest_common.sh@626 -- # local arg=wait 00:08:58.444 14:13:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:58.444 14:13:40 -- common/autotest_common.sh@630 -- # type -t wait 00:08:58.444 14:13:40 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:58.444 14:13:40 -- common/autotest_common.sh@641 -- # wait 3105269 00:08:58.444 14:13:40 -- common/autotest_common.sh@641 -- # es=1 00:08:58.444 14:13:40 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:58.444 14:13:40 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:58.444 14:13:40 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:58.444 14:13:40 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:58.444 14:13:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:58.444 14:13:40 -- common/autotest_common.sh@10 -- # set +x 00:08:58.444 14:13:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:58.444 14:13:40 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:58.444 14:13:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:58.444 14:13:40 -- common/autotest_common.sh@10 -- # set +x 00:08:58.702 [2024-04-26 14:13:40.015746] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:58.702 14:13:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:58.702 14:13:40 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.702 14:13:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:58.702 14:13:40 -- common/autotest_common.sh@10 -- # set +x 00:08:58.702 14:13:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:58.702 14:13:40 -- target/delete_subsystem.sh@54 -- # perf_pid=3105581 00:08:58.702 14:13:40 -- target/delete_subsystem.sh@56 -- # delay=0 00:08:58.702 14:13:40 -- target/delete_subsystem.sh@57 -- # kill -0 3105581 00:08:58.702 14:13:40 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:58.702 14:13:40 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:58.702 EAL: No free 2048 kB hugepages 
reported on node 1 00:08:58.702 [2024-04-26 14:13:40.082096] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:59.268 14:13:40 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:59.268 14:13:40 -- target/delete_subsystem.sh@57 -- # kill -0 3105581 00:08:59.268 14:13:40 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:59.525 14:13:41 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:59.525 14:13:41 -- target/delete_subsystem.sh@57 -- # kill -0 3105581 00:08:59.525 14:13:41 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:00.091 14:13:41 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:00.091 14:13:41 -- target/delete_subsystem.sh@57 -- # kill -0 3105581 00:09:00.091 14:13:41 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:00.656 14:13:42 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:00.656 14:13:42 -- target/delete_subsystem.sh@57 -- # kill -0 3105581 00:09:00.656 14:13:42 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:01.222 14:13:42 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:01.222 14:13:42 -- target/delete_subsystem.sh@57 -- # kill -0 3105581 00:09:01.222 14:13:42 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:01.480 14:13:43 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:01.480 14:13:43 -- target/delete_subsystem.sh@57 -- # kill -0 3105581 00:09:01.480 14:13:43 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:01.738 Initializing NVMe Controllers 00:09:01.738 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:01.738 Controller IO queue size 128, less than required. 00:09:01.738 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:01.738 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:01.738 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:01.738 Initialization complete. Launching workers. 
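The second round recreates the subsystem and lets perf (pid 3105581, a 3-second run this time) finish on its own: the script simply polls the pid twice a second until kill -0 fails, meaning the process is gone. kill -0 delivers no signal; it only tests that the pid exists and may be signaled. The shape of the loop, reconstructed from the surrounding xtrace (a sketch, not the verbatim delete_subsystem.sh):

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do   # probe only, no signal delivered
      if (( delay++ > 20 )); then             # safety valve: give up after ~10 s
          exit 1
      fi
      sleep 0.5
  done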
00:09:01.738 ======================================================== 00:09:01.738 Latency(us) 00:09:01.738 Device Information : IOPS MiB/s Average min max 00:09:01.738 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004225.20 1000262.54 1040651.08 00:09:01.738 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005599.83 1000183.89 1042785.23 00:09:01.738 ======================================================== 00:09:01.738 Total : 256.00 0.12 1004912.52 1000183.89 1042785.23 00:09:01.738 00:09:01.995 14:13:43 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:01.995 14:13:43 -- target/delete_subsystem.sh@57 -- # kill -0 3105581 00:09:01.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3105581) - No such process 00:09:01.995 14:13:43 -- target/delete_subsystem.sh@67 -- # wait 3105581 00:09:01.995 14:13:43 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:01.995 14:13:43 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:01.995 14:13:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:01.995 14:13:43 -- nvmf/common.sh@117 -- # sync 00:09:01.995 14:13:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:01.995 14:13:43 -- nvmf/common.sh@120 -- # set +e 00:09:01.995 14:13:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:01.995 14:13:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:01.995 rmmod nvme_tcp 00:09:02.254 rmmod nvme_fabrics 00:09:02.254 rmmod nvme_keyring 00:09:02.254 14:13:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:02.254 14:13:43 -- nvmf/common.sh@124 -- # set -e 00:09:02.254 14:13:43 -- nvmf/common.sh@125 -- # return 0 00:09:02.254 14:13:43 -- nvmf/common.sh@478 -- # '[' -n 3105161 ']' 00:09:02.254 14:13:43 -- nvmf/common.sh@479 -- # killprocess 3105161 00:09:02.254 14:13:43 -- common/autotest_common.sh@936 -- # '[' -z 3105161 ']' 00:09:02.254 14:13:43 -- common/autotest_common.sh@940 -- # kill -0 3105161 00:09:02.254 14:13:43 -- common/autotest_common.sh@941 -- # uname 00:09:02.254 14:13:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:02.254 14:13:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3105161 00:09:02.254 14:13:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:02.254 14:13:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:02.254 14:13:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3105161' 00:09:02.254 killing process with pid 3105161 00:09:02.254 14:13:43 -- common/autotest_common.sh@955 -- # kill 3105161 00:09:02.254 14:13:43 -- common/autotest_common.sh@960 -- # wait 3105161 00:09:02.512 14:13:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:02.512 14:13:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:02.512 14:13:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:02.512 14:13:43 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:02.512 14:13:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:02.512 14:13:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.512 14:13:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.512 14:13:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.417 14:13:45 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:04.417 00:09:04.417 real 0m11.956s 00:09:04.417 user 0m27.783s 00:09:04.417 sys 0m2.717s 
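With both runs done, nvmftestfini unwinds the setup in reverse: unload the kernel initiator modules, kill the nvmf_tgt process, drop the network namespace, and flush the leftover initiator address. A manual equivalent, grounded in the commands logged above — except that the body of _remove_spdk_ns is not shown in the log, so the ip netns delete line is an assumption about what it amounts to:

  sync
  modprobe -v -r nvme-tcp            # also drags out nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"     # stop the nvmf_tgt reactor process
  ip netns delete cvl_0_0_ns_spdk        # assumed _remove_spdk_ns equivalent
  ip -4 addr flush cvl_0_1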
00:09:04.417 14:13:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:04.417 14:13:45 -- common/autotest_common.sh@10 -- # set +x 00:09:04.417 ************************************ 00:09:04.417 END TEST nvmf_delete_subsystem 00:09:04.417 ************************************ 00:09:04.417 14:13:45 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:04.417 14:13:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:04.417 14:13:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:04.417 14:13:45 -- common/autotest_common.sh@10 -- # set +x 00:09:04.676 ************************************ 00:09:04.676 START TEST nvmf_ns_masking 00:09:04.676 ************************************ 00:09:04.676 14:13:46 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:04.676 * Looking for test storage... 00:09:04.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:04.676 14:13:46 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:04.676 14:13:46 -- nvmf/common.sh@7 -- # uname -s 00:09:04.676 14:13:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.676 14:13:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.676 14:13:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.676 14:13:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.676 14:13:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.676 14:13:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.676 14:13:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.676 14:13:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.676 14:13:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.676 14:13:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.676 14:13:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:04.676 14:13:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:04.676 14:13:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.676 14:13:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.676 14:13:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:04.676 14:13:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.676 14:13:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:04.676 14:13:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.676 14:13:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.676 14:13:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.676 14:13:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.676 14:13:46 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.676 14:13:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.676 14:13:46 -- paths/export.sh@5 -- # export PATH 00:09:04.676 14:13:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.676 14:13:46 -- nvmf/common.sh@47 -- # : 0 00:09:04.676 14:13:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:04.676 14:13:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:04.676 14:13:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.676 14:13:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.676 14:13:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.676 14:13:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:04.676 14:13:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:04.676 14:13:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:04.676 14:13:46 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:04.676 14:13:46 -- target/ns_masking.sh@11 -- # loops=5 00:09:04.676 14:13:46 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:04.676 14:13:46 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:09:04.676 14:13:46 -- target/ns_masking.sh@15 -- # uuidgen 00:09:04.676 14:13:46 -- target/ns_masking.sh@15 -- # HOSTID=2b11ed54-88cb-477f-be20-abac43adf55f 00:09:04.676 14:13:46 -- target/ns_masking.sh@44 -- # nvmftestinit 00:09:04.676 14:13:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:04.676 14:13:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:04.676 14:13:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:04.676 14:13:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:04.676 14:13:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:04.676 14:13:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.676 14:13:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:04.676 14:13:46 -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:09:04.676 14:13:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:04.676 14:13:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:04.676 14:13:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:04.676 14:13:46 -- common/autotest_common.sh@10 -- # set +x 00:09:06.584 14:13:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:06.584 14:13:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:06.584 14:13:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:06.584 14:13:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:06.584 14:13:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:06.584 14:13:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:06.584 14:13:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:06.584 14:13:47 -- nvmf/common.sh@295 -- # net_devs=() 00:09:06.584 14:13:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:06.584 14:13:47 -- nvmf/common.sh@296 -- # e810=() 00:09:06.584 14:13:47 -- nvmf/common.sh@296 -- # local -ga e810 00:09:06.584 14:13:47 -- nvmf/common.sh@297 -- # x722=() 00:09:06.584 14:13:47 -- nvmf/common.sh@297 -- # local -ga x722 00:09:06.584 14:13:47 -- nvmf/common.sh@298 -- # mlx=() 00:09:06.584 14:13:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:06.584 14:13:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.584 14:13:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.584 14:13:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.584 14:13:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.584 14:13:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.584 14:13:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.584 14:13:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.584 14:13:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.584 14:13:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.584 14:13:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.584 14:13:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.584 14:13:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:06.584 14:13:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:06.584 14:13:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:06.584 14:13:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:06.584 14:13:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:06.584 14:13:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:06.584 14:13:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.584 14:13:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:09:06.584 Found 0000:08:00.0 (0x8086 - 0x159b) 00:09:06.584 14:13:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.584 14:13:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.584 14:13:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.584 14:13:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.584 14:13:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.584 14:13:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.584 14:13:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:09:06.584 Found 0000:08:00.1 (0x8086 - 0x159b) 00:09:06.584 14:13:47 -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:09:06.584 14:13:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.584 14:13:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.584 14:13:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.584 14:13:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.584 14:13:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:06.584 14:13:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:06.584 14:13:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:06.584 14:13:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.584 14:13:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.584 14:13:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:06.585 14:13:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.585 14:13:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:09:06.585 Found net devices under 0000:08:00.0: cvl_0_0 00:09:06.585 14:13:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.585 14:13:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.585 14:13:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.585 14:13:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:06.585 14:13:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.585 14:13:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:09:06.585 Found net devices under 0000:08:00.1: cvl_0_1 00:09:06.585 14:13:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.585 14:13:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:06.585 14:13:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:06.585 14:13:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:06.585 14:13:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:06.585 14:13:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:06.585 14:13:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.585 14:13:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:06.585 14:13:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:06.585 14:13:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:06.585 14:13:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:06.585 14:13:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:06.585 14:13:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:06.585 14:13:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:06.585 14:13:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.585 14:13:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:06.585 14:13:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:06.585 14:13:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:06.585 14:13:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:06.585 14:13:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:06.585 14:13:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:06.585 14:13:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:06.585 14:13:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:06.585 14:13:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:06.585 14:13:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:06.585 14:13:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:06.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:06.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:09:06.585 00:09:06.585 --- 10.0.0.2 ping statistics --- 00:09:06.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.585 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:09:06.585 14:13:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:06.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:06.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:09:06.585 00:09:06.585 --- 10.0.0.1 ping statistics --- 00:09:06.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.585 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:09:06.585 14:13:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.585 14:13:47 -- nvmf/common.sh@411 -- # return 0 00:09:06.585 14:13:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:06.585 14:13:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.585 14:13:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:06.585 14:13:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:06.585 14:13:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.585 14:13:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:06.585 14:13:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:06.585 14:13:47 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:09:06.585 14:13:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:06.585 14:13:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:06.585 14:13:47 -- common/autotest_common.sh@10 -- # set +x 00:09:06.585 14:13:47 -- nvmf/common.sh@470 -- # nvmfpid=3107401 00:09:06.585 14:13:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:06.585 14:13:47 -- nvmf/common.sh@471 -- # waitforlisten 3107401 00:09:06.585 14:13:47 -- common/autotest_common.sh@817 -- # '[' -z 3107401 ']' 00:09:06.585 14:13:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.585 14:13:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:06.585 14:13:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.585 14:13:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:06.585 14:13:47 -- common/autotest_common.sh@10 -- # set +x 00:09:06.585 [2024-04-26 14:13:47.923678] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:09:06.585 [2024-04-26 14:13:47.923778] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.585 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.585 [2024-04-26 14:13:47.994473] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:06.585 [2024-04-26 14:13:48.114996] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:06.585 [2024-04-26 14:13:48.115059] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.585 [2024-04-26 14:13:48.115075] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.585 [2024-04-26 14:13:48.115088] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.585 [2024-04-26 14:13:48.115100] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.585 [2024-04-26 14:13:48.115181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.585 [2024-04-26 14:13:48.115207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.585 [2024-04-26 14:13:48.115265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.585 [2024-04-26 14:13:48.115269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.844 14:13:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:06.844 14:13:48 -- common/autotest_common.sh@850 -- # return 0 00:09:06.844 14:13:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:06.844 14:13:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:06.844 14:13:48 -- common/autotest_common.sh@10 -- # set +x 00:09:06.844 14:13:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.844 14:13:48 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:07.102 [2024-04-26 14:13:48.534189] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:07.102 14:13:48 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:09:07.102 14:13:48 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:09:07.102 14:13:48 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:07.361 Malloc1 00:09:07.361 14:13:48 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:07.619 Malloc2 00:09:07.619 14:13:49 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:08.183 14:13:49 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:08.442 14:13:49 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:08.699 [2024-04-26 14:13:50.045154] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:08.699 14:13:50 -- target/ns_masking.sh@61 -- # connect 00:09:08.699 14:13:50 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2b11ed54-88cb-477f-be20-abac43adf55f -a 10.0.0.2 -s 4420 -i 4 00:09:08.699 14:13:50 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:09:08.699 14:13:50 -- common/autotest_common.sh@1184 -- # local i=0 00:09:08.699 14:13:50 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:08.699 14:13:50 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 
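The connect helper just invoked ties the initiator to a fixed host identity: a host NQN (-q nqn.2016-06.io.spdk:host1) and a host UUID (-I, this run's uuidgen output), which is what lets the target apply per-host namespace masks — later in the test Malloc1 is re-attached with --no-auto-visible and this same host stops seeing NSID 1. The ns_is_visible checks that follow grep the controller's active-namespace list and read the NGUID: a visible namespace reports its real NGUID, a masked one reads back all zeroes. Condensed from the commands in the log:

  hostid=2b11ed54-88cb-477f-be20-abac43adf55f    # this run's uuidgen output
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -i 4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I "$hostid"
  nvme list-ns /dev/nvme0 | grep 0x1                    # is NSID 1 in the active list?
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all zeroes => masked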
00:09:08.699 14:13:50 -- common/autotest_common.sh@1191 -- # sleep 2
00:09:11.228 14:13:52 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 ))
00:09:11.228 14:13:52 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL
00:09:11.228 14:13:52 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME
00:09:11.228 14:13:52 -- common/autotest_common.sh@1193 -- # nvme_devices=1
00:09:11.228 14:13:52 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter ))
00:09:11.228 14:13:52 -- common/autotest_common.sh@1194 -- # return 0
00:09:11.228 14:13:52 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json
00:09:11.228 14:13:52 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:09:11.228 14:13:52 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0
00:09:11.228 14:13:52 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]]
00:09:11.228 14:13:52 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1
00:09:11.228 14:13:52 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:09:11.228 14:13:52 -- target/ns_masking.sh@39 -- # grep 0x1
[ 0]:0x1
00:09:11.228 14:13:52 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:09:11.228 14:13:52 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:09:11.228 14:13:52 -- target/ns_masking.sh@40 -- # nguid=28e7940f9a9c4f9a94082bc680b39b42
00:09:11.228 14:13:52 -- target/ns_masking.sh@41 -- # [[ 28e7940f9a9c4f9a94082bc680b39b42 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:09:11.228 14:13:52 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
00:09:11.228 14:13:52 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1
00:09:11.228 14:13:52 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:09:11.228 14:13:52 -- target/ns_masking.sh@39 -- # grep 0x1
[ 0]:0x1
00:09:11.228 14:13:52 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:09:11.228 14:13:52 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:09:11.228 14:13:52 -- target/ns_masking.sh@40 -- # nguid=28e7940f9a9c4f9a94082bc680b39b42
00:09:11.228 14:13:52 -- target/ns_masking.sh@41 -- # [[ 28e7940f9a9c4f9a94082bc680b39b42 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:09:11.228 14:13:52 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2
00:09:11.228 14:13:52 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:09:11.228 14:13:52 -- target/ns_masking.sh@39 -- # grep 0x2
[ 1]:0x2
00:09:11.228 14:13:52 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:09:11.228 14:13:52 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:09:11.228 14:13:52 -- target/ns_masking.sh@40 -- # nguid=5ea8ca5780734bebb1e434ffbb0f365d
00:09:11.228 14:13:52 -- target/ns_masking.sh@41 -- # [[ 5ea8ca5780734bebb1e434ffbb0f365d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:09:11.228 14:13:52 -- target/ns_masking.sh@69 -- # disconnect
00:09:11.228 14:13:52 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:11.228 14:13:52 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:11.794 14:13:53 -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:09:12.052 14:13:53 -- target/ns_masking.sh@77 -- # connect 1
00:09:12.052 14:13:53 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2b11ed54-88cb-477f-be20-abac43adf55f -a 10.0.0.2 -s 4420 -i 4
00:09:12.052 14:13:53 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1
00:09:12.052 14:13:53 -- common/autotest_common.sh@1184 -- # local i=0
00:09:12.052 14:13:53 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0
00:09:12.052 14:13:53 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]]
00:09:12.052 14:13:53 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1
00:09:12.052 14:13:53 -- common/autotest_common.sh@1191 -- # sleep 2
00:09:13.949 14:13:55 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 ))
00:09:13.949 14:13:55 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL
00:09:13.949 14:13:55 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME
00:09:13.949 14:13:55 -- common/autotest_common.sh@1193 -- # nvme_devices=1
00:09:13.949 14:13:55 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter ))
00:09:13.949 14:13:55 -- common/autotest_common.sh@1194 -- # return 0
00:09:13.949 14:13:55 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json
00:09:13.949 14:13:55 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:09:14.206 14:13:55 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0
00:09:14.206 14:13:55 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]]
00:09:14.206 14:13:55 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1
00:09:14.206 14:13:55 -- common/autotest_common.sh@638 -- # local es=0
00:09:14.206 14:13:55 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1
00:09:14.206 14:13:55 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible
00:09:14.206 14:13:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:09:14.206 14:13:55 -- common/autotest_common.sh@630 -- # type -t ns_is_visible
00:09:14.206 14:13:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:09:14.206 14:13:55 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1
00:09:14.206 14:13:55 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:09:14.206 14:13:55 -- target/ns_masking.sh@39 -- # grep 0x1
00:09:14.207 14:13:55 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:09:14.207 14:13:55 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:09:14.207 14:13:55 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000
00:09:14.207 14:13:55 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:09:14.207 14:13:55 -- common/autotest_common.sh@641 -- # es=1
00:09:14.207 14:13:55 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:09:14.207 14:13:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:09:14.207 14:13:55 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
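Namespace 1 was re-added with --no-auto-visible, so the `NOT ns_is_visible 0x1` assertion above passes: the namespace no longer shows up in `nvme list-ns` and its NGUID reads back as 32 zeroes. The ns_is_visible helper the trace keeps re-entering (the @39-@41 lines) boils down to something like the following reconstruction (inferred from the trace, not copied from ns_masking.sh):

ns_is_visible() {
    # a visible namespace shows up in the controller's namespace list ...
    nvme list-ns /dev/nvme0 | grep "$1"
    # ... and reports a non-zero NGUID
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

NOT (from autotest_common.sh) runs its argument and inverts the exit status; the es bookkeeping visible in the @638-@665 lines is that inversion.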
00:09:14.207 14:13:55 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2
00:09:14.207 14:13:55 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:09:14.207 14:13:55 -- target/ns_masking.sh@39 -- # grep 0x2
[ 0]:0x2
00:09:14.207 14:13:55 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:09:14.207 14:13:55 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:09:14.207 14:13:55 -- target/ns_masking.sh@40 -- # nguid=5ea8ca5780734bebb1e434ffbb0f365d
00:09:14.207 14:13:55 -- target/ns_masking.sh@41 -- # [[ 5ea8ca5780734bebb1e434ffbb0f365d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:09:14.207 14:13:55 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:09:14.464 14:13:55 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1
00:09:14.464 14:13:55 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:09:14.464 14:13:55 -- target/ns_masking.sh@39 -- # grep 0x1
[ 0]:0x1
00:09:14.464 14:13:55 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:09:14.464 14:13:55 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:09:14.464 14:13:55 -- target/ns_masking.sh@40 -- # nguid=28e7940f9a9c4f9a94082bc680b39b42
00:09:14.465 14:13:55 -- target/ns_masking.sh@41 -- # [[ 28e7940f9a9c4f9a94082bc680b39b42 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:09:14.465 14:13:55 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2
00:09:14.465 14:13:55 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:09:14.465 14:13:55 -- target/ns_masking.sh@39 -- # grep 0x2
[ 1]:0x2
00:09:14.465 14:13:56 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:09:14.465 14:13:56 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:09:14.722 14:13:56 -- target/ns_masking.sh@40 -- # nguid=5ea8ca5780734bebb1e434ffbb0f365d
00:09:14.722 14:13:56 -- target/ns_masking.sh@41 -- # [[ 5ea8ca5780734bebb1e434ffbb0f365d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:09:14.722 14:13:56 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:09:14.980 14:13:56 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1
00:09:14.980 14:13:56 -- common/autotest_common.sh@638 -- # local es=0
00:09:14.980 14:13:56 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1
00:09:14.980 14:13:56 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible
00:09:14.980 14:13:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:09:14.980 14:13:56 -- common/autotest_common.sh@630 -- # type -t ns_is_visible
00:09:14.980 14:13:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:09:14.980 14:13:56 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1
00:09:14.980 14:13:56 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:09:14.980 14:13:56 -- target/ns_masking.sh@39 -- # grep 0x1
00:09:14.980 14:13:56 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:09:14.980 14:13:56 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:09:14.980 14:13:56 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000
00:09:14.980 14:13:56 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:09:14.980 14:13:56 -- common/autotest_common.sh@641 -- # es=1
00:09:14.980 14:13:56 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:09:14.980 14:13:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:09:14.980 14:13:56 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:09:14.980 14:13:56 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2
00:09:14.980 14:13:56 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:09:14.980 14:13:56 -- target/ns_masking.sh@39 -- # grep 0x2
[ 0]:0x2
00:09:14.980 14:13:56 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:09:14.980 14:13:56 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:09:14.980 14:13:56 -- target/ns_masking.sh@40 -- # nguid=5ea8ca5780734bebb1e434ffbb0f365d
00:09:14.980 14:13:56 -- target/ns_masking.sh@41 -- # [[ 5ea8ca5780734bebb1e434ffbb0f365d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:09:14.980 14:13:56 -- target/ns_masking.sh@91 -- # disconnect
00:09:14.980 14:13:56 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:14.980 14:13:56 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:09:15.238 14:13:56 -- target/ns_masking.sh@95 -- # connect 2
00:09:15.238 14:13:56 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 2b11ed54-88cb-477f-be20-abac43adf55f -a 10.0.0.2 -s 4420 -i 4
00:09:15.496 14:13:56 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2
00:09:15.496 14:13:56 -- common/autotest_common.sh@1184 -- # local i=0
00:09:15.496 14:13:56 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0
00:09:15.496 14:13:56 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]]
00:09:15.496 14:13:56 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2
00:09:15.496 14:13:56 -- common/autotest_common.sh@1191 -- # sleep 2
00:09:17.394 14:13:58 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 ))
00:09:17.394 14:13:58 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL
00:09:17.394 14:13:58 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME
00:09:17.394 14:13:58 -- common/autotest_common.sh@1193 -- # nvme_devices=2
00:09:17.394 14:13:58 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter ))
00:09:17.394 14:13:58 -- common/autotest_common.sh@1194 -- # return 0
00:09:17.394 14:13:58 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json
00:09:17.394 14:13:58 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:09:17.394 14:13:58 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0
00:09:17.394 14:13:58 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]]
00:09:17.394 14:13:58 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1
00:09:17.394 14:13:58 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:09:17.394 14:13:58 -- target/ns_masking.sh@39 -- # grep 0x1
[ 0]:0x1
00:09:17.394 14:13:58 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:09:17.394 14:13:58 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:09:17.394 14:13:58 -- target/ns_masking.sh@40 -- # nguid=28e7940f9a9c4f9a94082bc680b39b42
00:09:17.394 14:13:58 -- target/ns_masking.sh@41 -- # [[ 28e7940f9a9c4f9a94082bc680b39b42 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:09:17.394 14:13:58 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2
00:09:17.394 14:13:58 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:09:17.394 14:13:58 -- target/ns_masking.sh@39 -- # grep 0x2
[ 1]:0x2
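Here the per-host masking RPCs do the work: nvmf_ns_add_host exposed the hidden namespace 1 to host1 before the reconnect, so the same host that previously saw nothing now enumerates both namespaces. The two knobs are symmetric (sketch; $rpc as before):

$rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1     # expose ns 1 to host1
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1  # hide it again

Both apply to namespaces added with --no-auto-visible; the JSON-RPC error later in the trace shows what happens when they are aimed at an auto-visible namespace.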
00:09:17.394 14:13:58 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:09:17.394 14:13:58 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:09:17.652 14:13:58 -- target/ns_masking.sh@40 -- # nguid=5ea8ca5780734bebb1e434ffbb0f365d
00:09:17.652 14:13:58 -- target/ns_masking.sh@41 -- # [[ 5ea8ca5780734bebb1e434ffbb0f365d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:09:17.652 14:13:58 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:09:17.910 14:13:59 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1
00:09:17.910 14:13:59 -- common/autotest_common.sh@638 -- # local es=0
00:09:17.910 14:13:59 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1
00:09:17.910 14:13:59 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible
00:09:17.910 14:13:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:09:17.910 14:13:59 -- common/autotest_common.sh@630 -- # type -t ns_is_visible
00:09:17.910 14:13:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:09:17.910 14:13:59 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1
00:09:17.910 14:13:59 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:09:17.910 14:13:59 -- target/ns_masking.sh@39 -- # grep 0x1
00:09:17.910 14:13:59 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:09:17.910 14:13:59 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:09:17.910 14:13:59 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000
00:09:17.910 14:13:59 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:09:17.910 14:13:59 -- common/autotest_common.sh@641 -- # es=1
00:09:17.910 14:13:59 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:09:17.910 14:13:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:09:17.910 14:13:59 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:09:17.910 14:13:59 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2
00:09:17.910 14:13:59 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:09:17.910 14:13:59 -- target/ns_masking.sh@39 -- # grep 0x2
[ 0]:0x2
00:09:17.910 14:13:59 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:09:17.910 14:13:59 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:09:17.910 14:13:59 -- target/ns_masking.sh@40 -- # nguid=5ea8ca5780734bebb1e434ffbb0f365d
00:09:17.910 14:13:59 -- target/ns_masking.sh@41 -- # [[ 5ea8ca5780734bebb1e434ffbb0f365d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:09:17.910 14:13:59 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:09:17.910 14:13:59 -- common/autotest_common.sh@638 -- # local es=0
00:09:17.910 14:13:59 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:09:17.910 14:13:59 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:09:17.910 14:13:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:09:17.910 14:13:59 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:09:17.910 14:13:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:09:17.910 14:13:59 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:09:17.910 14:13:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:09:17.910 14:13:59 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:09:17.910 14:13:59 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:09:17.910 14:13:59 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:09:18.168 [2024-04-26 14:13:59.659988] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:09:18.168 request:
00:09:18.168 {
00:09:18.168 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:09:18.168 "nsid": 2,
00:09:18.168 "host": "nqn.2016-06.io.spdk:host1",
00:09:18.168 "method": "nvmf_ns_remove_host",
00:09:18.168 "req_id": 1
00:09:18.168 }
00:09:18.168 Got JSON-RPC error response
00:09:18.168 response:
00:09:18.168 {
00:09:18.168 "code": -32602,
00:09:18.168 "message": "Invalid parameters"
00:09:18.168 }
00:09:18.168 14:13:59 -- common/autotest_common.sh@641 -- # es=1
00:09:18.168 14:13:59 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:09:18.168 14:13:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:09:18.168 14:13:59 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
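The failure is the point of this step: namespace 2 was added without --no-auto-visible, so the target rejects an attempt to manipulate its per-host visibility, and the test asserts the non-zero exit status (that is what the NOT wrapper and the es arithmetic above verify). In script form the expectation is simply (a sketch):

NOT $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1  # must fail: ns 2 is auto-visible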
00:09:18.168 14:13:59 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1
00:09:18.168 14:13:59 -- common/autotest_common.sh@638 -- # local es=0
00:09:18.168 14:13:59 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1
00:09:18.168 14:13:59 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible
00:09:18.168 14:13:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:09:18.168 14:13:59 -- common/autotest_common.sh@630 -- # type -t ns_is_visible
00:09:18.168 14:13:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:09:18.168 14:13:59 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1
00:09:18.168 14:13:59 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:09:18.168 14:13:59 -- target/ns_masking.sh@39 -- # grep 0x1
00:09:18.168 14:13:59 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:09:18.168 14:13:59 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:09:18.168 14:13:59 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000
00:09:18.168 14:13:59 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:09:18.168 14:13:59 -- common/autotest_common.sh@641 -- # es=1
00:09:18.168 14:13:59 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:09:18.168 14:13:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:09:18.168 14:13:59 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:09:18.168 14:13:59 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2
00:09:18.168 14:13:59 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:09:18.168 14:13:59 -- target/ns_masking.sh@39 -- # grep 0x2
[ 0]:0x2
00:09:18.427 14:13:59 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:09:18.427 14:13:59 -- target/ns_masking.sh@40 -- # jq -r .nguid
00:09:18.427 14:13:59 -- target/ns_masking.sh@40 -- # nguid=5ea8ca5780734bebb1e434ffbb0f365d
00:09:18.427 14:13:59 -- target/ns_masking.sh@41 -- # [[ 5ea8ca5780734bebb1e434ffbb0f365d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:09:18.427 14:13:59 -- target/ns_masking.sh@108 -- # disconnect
00:09:18.427 14:13:59 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:18.427 14:13:59 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:18.685 14:14:00 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:09:18.685 14:14:00 -- target/ns_masking.sh@114 -- # nvmftestfini
00:09:18.685 14:14:00 -- nvmf/common.sh@477 -- # nvmfcleanup
00:09:18.685 14:14:00 -- nvmf/common.sh@117 -- # sync
00:09:18.685 14:14:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:18.685 14:14:00 -- nvmf/common.sh@120 -- # set +e
00:09:18.685 14:14:00 -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:18.685 14:14:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:09:18.685 14:14:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:18.685 14:14:00 -- nvmf/common.sh@124 -- # set -e
00:09:18.685 14:14:00 -- nvmf/common.sh@125 -- # return 0
00:09:18.685 14:14:00 -- nvmf/common.sh@478 -- # '[' -n 3107401 ']'
00:09:18.685 14:14:00 -- nvmf/common.sh@479 -- # killprocess 3107401
00:09:18.685 14:14:00 -- common/autotest_common.sh@936 -- # '[' -z 3107401 ']'
00:09:18.685 14:14:00 -- common/autotest_common.sh@940 -- # kill -0 3107401
00:09:18.685 14:14:00 -- common/autotest_common.sh@941 -- # uname
00:09:18.685 14:14:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:18.685 14:14:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3107401
00:09:18.685 14:14:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:18.685 14:14:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:18.685 14:14:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3107401'
killing process with pid 3107401
00:09:18.685 14:14:00 -- common/autotest_common.sh@955 -- # kill 3107401
00:09:18.685 14:14:00 -- common/autotest_common.sh@960 -- # wait 3107401
00:09:18.944 14:14:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:09:18.944 14:14:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:09:18.944 14:14:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:09:18.944 14:14:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:18.944 14:14:00 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:18.944 14:14:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:18.944 14:14:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:18.944 14:14:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:21.482 14:14:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:21.482
00:09:21.482 real 0m16.426s
00:09:21.482 user 0m52.979s
00:09:21.482 sys 0m3.300s
00:09:21.482 14:14:02 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:09:21.482 14:14:02 -- common/autotest_common.sh@10 -- # set +x
00:09:21.482 ************************************
00:09:21.482 END TEST nvmf_ns_masking
00:09:21.482 ************************************
00:09:21.482 14:14:02 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]]
00:09:21.482 14:14:02 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:09:21.482 14:14:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:09:21.482 14:14:02 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:21.482 14:14:02 -- common/autotest_common.sh@10 -- # set +x
00:09:21.482 ************************************
00:09:21.482 START TEST nvmf_nvme_cli
00:09:21.482 ************************************
00:09:21.482 14:14:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:09:21.482 * Looking for test storage...
00:09:21.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:21.482 14:14:02 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:21.482 14:14:02 -- nvmf/common.sh@7 -- # uname -s
00:09:21.482 14:14:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:21.482 14:14:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:21.482 14:14:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:21.482 14:14:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:21.482 14:14:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:21.482 14:14:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:21.482 14:14:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:21.482 14:14:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:21.482 14:14:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:21.482 14:14:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:21.482 14:14:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:09:21.482 14:14:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc
00:09:21.482 14:14:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:21.482 14:14:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:21.482 14:14:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:21.482 14:14:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:21.482 14:14:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:21.482 14:14:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:21.482 14:14:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:21.482 14:14:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:21.482 14:14:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:21.482 14:14:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:21.482 14:14:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:21.482 14:14:02 -- paths/export.sh@5 -- # export PATH
00:09:21.482 14:14:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:21.482 14:14:02 -- nvmf/common.sh@47 -- # : 0
00:09:21.482 14:14:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:09:21.482 14:14:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:09:21.482 14:14:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:21.482 14:14:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:21.482 14:14:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:21.482 14:14:02 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:09:21.482 14:14:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:09:21.482 14:14:02 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:09:21.482 14:14:02 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64
00:09:21.482 14:14:02 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:09:21.482 14:14:02 -- target/nvme_cli.sh@14 -- # devs=()
00:09:21.482 14:14:02 -- target/nvme_cli.sh@16 -- # nvmftestinit
00:09:21.482 14:14:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:09:21.482 14:14:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:21.482 14:14:02 -- nvmf/common.sh@437 -- # prepare_net_devs
00:09:21.482 14:14:02 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:09:21.482 14:14:02 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:09:21.482 14:14:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:21.482 14:14:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:21.482 14:14:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:21.482 14:14:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:09:21.482 14:14:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:09:21.482 14:14:02 -- nvmf/common.sh@285 -- # xtrace_disable
00:09:21.482 14:14:02 -- common/autotest_common.sh@10 -- # set +x
00:09:22.868 14:14:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:09:22.868 14:14:04 -- nvmf/common.sh@291 -- # pci_devs=()
00:09:22.868 14:14:04 -- nvmf/common.sh@291 -- # local -a pci_devs
00:09:22.868 14:14:04 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:09:22.868 14:14:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:09:22.868 14:14:04 -- nvmf/common.sh@293 -- # pci_drivers=()
00:09:22.868 14:14:04 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:09:22.868 14:14:04 -- nvmf/common.sh@295 -- # net_devs=()
00:09:22.868 14:14:04 -- nvmf/common.sh@295 -- # local -ga net_devs
00:09:22.868 14:14:04 -- nvmf/common.sh@296 -- # e810=()
00:09:22.868 14:14:04 -- nvmf/common.sh@296 -- # local -ga e810
00:09:22.868 14:14:04 -- nvmf/common.sh@297 -- # x722=()
00:09:22.868 14:14:04 -- nvmf/common.sh@297 -- # local -ga x722
00:09:22.868 14:14:04 -- nvmf/common.sh@298 -- # mlx=()
00:09:22.868 14:14:04 -- nvmf/common.sh@298 -- # local -ga mlx
00:09:22.868 14:14:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:09:22.868 14:14:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:09:22.868 14:14:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:09:22.868 14:14:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:09:22.868 14:14:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:09:22.868 14:14:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:09:22.868 14:14:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:09:22.868 14:14:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:09:22.868 14:14:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:09:22.868 14:14:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:09:22.868 14:14:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:09:22.868 14:14:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:09:22.868 14:14:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:09:22.868 14:14:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:09:22.868 14:14:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:09:22.868 14:14:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:09:22.868 14:14:04 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:09:22.868 14:14:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:09:22.868 14:14:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)'
Found 0000:08:00.0 (0x8086 - 0x159b)
00:09:22.868 14:14:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:09:22.868 14:14:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:09:22.868 14:14:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:22.868 14:14:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:22.868 14:14:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:09:22.868 14:14:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:09:22.868 14:14:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)'
Found 0000:08:00.1 (0x8086 - 0x159b)
00:09:22.868 14:14:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:09:22.868 14:14:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:09:22.868 14:14:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:22.868 14:14:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:22.868 14:14:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:09:22.868 14:14:04 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:09:22.868 14:14:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:09:22.868 14:14:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:09:22.868 14:14:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:09:22.868 14:14:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:22.868 14:14:04 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:09:22.868 14:14:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:22.868 14:14:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0'
Found net devices under 0000:08:00.0: cvl_0_0
00:09:22.868 14:14:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:09:22.868 14:14:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:09:22.868 14:14:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:22.868 14:14:04 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:09:22.868 14:14:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:22.868 14:14:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1'
Found net devices under 0000:08:00.1: cvl_0_1
00:09:22.868 14:14:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:09:22.868 14:14:04 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:09:22.868 14:14:04 -- nvmf/common.sh@403 -- # is_hw=yes
00:09:22.868 14:14:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:09:22.868 14:14:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:09:22.868 14:14:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:09:22.868 14:14:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:22.868 14:14:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:22.868 14:14:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:22.868 14:14:04 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:09:22.868 14:14:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:22.868 14:14:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:22.868 14:14:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:09:22.868 14:14:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:22.868 14:14:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:22.868 14:14:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:09:22.868 14:14:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:09:22.868 14:14:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:09:22.868 14:14:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:22.868 14:14:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:22.868 14:14:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:22.868 14:14:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:09:22.868 14:14:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:22.868 14:14:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:22.868 14:14:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:22.868 14:14:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:22.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms
00:09:22.868
00:09:22.868 --- 10.0.0.2 ping statistics ---
00:09:22.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:22.868 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms
00:09:22.868 14:14:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:22.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms
00:09:22.868
00:09:22.868 --- 10.0.0.1 ping statistics ---
00:09:22.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:22.868 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms
00:09:22.868 14:14:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:22.868 14:14:04 -- nvmf/common.sh@411 -- # return 0
00:09:22.868 14:14:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:09:22.868 14:14:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:22.868 14:14:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:09:22.868 14:14:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:09:22.868 14:14:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:22.868 14:14:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:09:22.868 14:14:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
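Both E810 ports sit in the same chassis, so the harness moves the target port into a private network namespace before starting the target; the ping exchange above is the smoke test that the 10.0.0.1/10.0.0.2 link works in both directions. The essential moves, as in the @244-@264 trace lines (interface names from the log):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the default ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in

Every target-side command afterwards runs as `ip netns exec cvl_0_0_ns_spdk ...`, which is what the NVMF_TARGET_NS_CMD array is for.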
00:09:22.868 14:14:04 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF
00:09:22.868 14:14:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:09:22.868 14:14:04 -- common/autotest_common.sh@710 -- # xtrace_disable
00:09:22.868 14:14:04 -- common/autotest_common.sh@10 -- # set +x
00:09:22.868 14:14:04 -- nvmf/common.sh@470 -- # nvmfpid=3110176
00:09:22.868 14:14:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:09:22.868 14:14:04 -- nvmf/common.sh@471 -- # waitforlisten 3110176
00:09:22.868 14:14:04 -- common/autotest_common.sh@817 -- # '[' -z 3110176 ']'
00:09:22.868 14:14:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:22.868 14:14:04 -- common/autotest_common.sh@822 -- # local max_retries=100
00:09:22.868 14:14:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:22.868 14:14:04 -- common/autotest_common.sh@826 -- # xtrace_disable
00:09:22.868 14:14:04 -- common/autotest_common.sh@10 -- # set +x
00:09:23.126 [2024-04-26 14:14:04.462196] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
00:09:23.126 [2024-04-26 14:14:04.462293] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:23.126 EAL: No free 2048 kB hugepages reported on node 1
00:09:23.126 [2024-04-26 14:14:04.529098] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:23.126 [2024-04-26 14:14:04.647025] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:23.126 [2024-04-26 14:14:04.647085] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:23.126 [2024-04-26 14:14:04.647101] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:23.126 [2024-04-26 14:14:04.647115] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:23.126 [2024-04-26 14:14:04.647127] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:23.126 [2024-04-26 14:14:04.647207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:23.126 [2024-04-26 14:14:04.647273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:09:23.126 [2024-04-26 14:14:04.648652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:09:23.126 [2024-04-26 14:14:04.648704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:23.406 14:14:04 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:09:23.406 14:14:04 -- common/autotest_common.sh@850 -- # return 0
00:09:23.406 14:14:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:09:23.406 14:14:04 -- common/autotest_common.sh@716 -- # xtrace_disable
00:09:23.406 14:14:04 -- common/autotest_common.sh@10 -- # set +x
00:09:23.406 14:14:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:23.406 14:14:04 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:09:23.406 14:14:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:23.406 14:14:04 -- common/autotest_common.sh@10 -- # set +x
00:09:23.406 [2024-04-26 14:14:04.797319] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:23.406 14:14:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:23.406 14:14:04 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:09:23.406 14:14:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:23.406 14:14:04 -- common/autotest_common.sh@10 -- # set +x
00:09:23.406 Malloc0
00:09:23.406 14:14:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:23.406 14:14:04 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:09:23.406 14:14:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:23.406 14:14:04 -- common/autotest_common.sh@10 -- # set +x
00:09:23.406 Malloc1
00:09:23.406 14:14:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:23.406 14:14:04 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
00:09:23.406 14:14:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:23.406 14:14:04 -- common/autotest_common.sh@10 -- # set +x
00:09:23.406 14:14:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:23.406 14:14:04 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:09:23.406 14:14:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:23.406 14:14:04 -- common/autotest_common.sh@10 -- # set +x
00:09:23.406 14:14:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:23.406 14:14:04 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:09:23.406 14:14:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:23.406 14:14:04 -- common/autotest_common.sh@10 -- # set +x
00:09:23.406 14:14:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:23.406 14:14:04 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:23.406 14:14:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:23.406 14:14:04 -- common/autotest_common.sh@10 -- # set +x
00:09:23.406 [2024-04-26 14:14:04.876418] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:23.406 14:14:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:23.406 14:14:04 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:09:23.406 14:14:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:23.406 14:14:04 -- common/autotest_common.sh@10 -- # set +x
00:09:23.406 14:14:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:23.406 14:14:04 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 4420
00:09:23.662
00:09:23.662 Discovery Log Number of Records 2, Generation counter 2
00:09:23.662 =====Discovery Log Entry 0======
00:09:23.662 trtype: tcp
00:09:23.662 adrfam: ipv4
00:09:23.662 subtype: current discovery subsystem
00:09:23.662 treq: not required
00:09:23.662 portid: 0
00:09:23.662 trsvcid: 4420
00:09:23.662 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:09:23.662 traddr: 10.0.0.2
00:09:23.662 eflags: explicit discovery connections, duplicate discovery information
00:09:23.662 sectype: none
00:09:23.662 =====Discovery Log Entry 1======
00:09:23.662 trtype: tcp
00:09:23.662 adrfam: ipv4
00:09:23.662 subtype: nvme subsystem
00:09:23.662 treq: not required
00:09:23.662 portid: 0
00:09:23.662 trsvcid: 4420
00:09:23.662 subnqn: nqn.2016-06.io.spdk:cnode1
00:09:23.662 traddr: 10.0.0.2
00:09:23.662 eflags: none
00:09:23.662 sectype: none
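The discovery page lists exactly two records, the discovery subsystem itself and cnode1, confirming the listener is reachable from the initiator side. The get_nvme_devs helper used next to count block devices before and after connecting is, reconstructed from the @510-@515 trace lines (an inference, not a copy of nvmf/common.sh):

get_nvme_devs() {
    local dev _
    while read -r dev _; do
        # keep only the device-node column of `nvme list`
        [[ $dev == /dev/nvme* ]] && echo "$dev"
    done < <(nvme list)
}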
00:09:23.662 14:14:04 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs))
00:09:23.662 14:14:04 -- target/nvme_cli.sh@31 -- # get_nvme_devs
00:09:23.662 14:14:04 -- nvmf/common.sh@511 -- # local dev _
00:09:23.662 14:14:04 -- nvmf/common.sh@513 -- # read -r dev _
00:09:23.662 14:14:04 -- nvmf/common.sh@510 -- # nvme list
00:09:23.662 14:14:04 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]]
00:09:23.662 14:14:04 -- nvmf/common.sh@513 -- # read -r dev _
00:09:23.662 14:14:04 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]]
00:09:23.662 14:14:04 -- nvmf/common.sh@513 -- # read -r dev _
00:09:23.662 14:14:04 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0
00:09:23.662 14:14:04 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:09:23.935 14:14:05 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2
00:09:23.935 14:14:05 -- common/autotest_common.sh@1184 -- # local i=0
00:09:23.935 14:14:05 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0
00:09:23.935 14:14:05 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]]
00:09:23.935 14:14:05 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2
00:09:23.935 14:14:05 -- common/autotest_common.sh@1191 -- # sleep 2
00:09:25.902 14:14:07 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 ))
00:09:25.902 14:14:07 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL
00:09:25.902 14:14:07 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME
00:09:25.902 14:14:07 -- common/autotest_common.sh@1193 -- # nvme_devices=2
00:09:25.902 14:14:07 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter ))
00:09:25.902 14:14:07 -- common/autotest_common.sh@1194 -- # return 0
00:09:25.902 14:14:07 -- target/nvme_cli.sh@35 -- # get_nvme_devs
00:09:25.902 14:14:07 -- nvmf/common.sh@511 -- # local dev _
00:09:25.902 14:14:07 -- nvmf/common.sh@513 -- # read -r dev _
00:09:25.902 14:14:07 -- nvmf/common.sh@510 -- # nvme list
00:09:26.160 14:14:07 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]]
00:09:26.160 14:14:07 -- nvmf/common.sh@513 -- # read -r dev _
00:09:26.160 14:14:07 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]]
00:09:26.160 14:14:07 -- nvmf/common.sh@513 -- # read -r dev _
00:09:26.160 14:14:07 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:09:26.160 14:14:07 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2
00:09:26.160 14:14:07 -- nvmf/common.sh@513 -- # read -r dev _
00:09:26.160 14:14:07 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:09:26.160 14:14:07 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1
00:09:26.160 14:14:07 -- nvmf/common.sh@513 -- # read -r dev _
00:09:26.160 14:14:07 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2
/dev/nvme0n1 ]]
00:09:26.160 14:14:07 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs))
00:09:26.160 14:14:07 -- target/nvme_cli.sh@59 -- # get_nvme_devs
00:09:26.160 14:14:07 -- nvmf/common.sh@511 -- # local dev _
00:09:26.160 14:14:07 -- nvmf/common.sh@513 -- # read -r dev _
00:09:26.160 14:14:07 -- nvmf/common.sh@510 -- # nvme list
00:09:26.160 14:14:07 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]]
00:09:26.160 14:14:07 -- nvmf/common.sh@513 -- # read -r dev _
00:09:26.160 14:14:07 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]]
00:09:26.160 14:14:07 -- nvmf/common.sh@513 -- # read -r dev _
00:09:26.160 14:14:07 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:09:26.160 14:14:07 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2
00:09:26.160 14:14:07 -- nvmf/common.sh@513 -- # read -r dev _
00:09:26.160 14:14:07 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:09:26.160 14:14:07 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1
00:09:26.160 14:14:07 -- nvmf/common.sh@513 -- # read -r dev _
00:09:26.160 14:14:07 -- target/nvme_cli.sh@59 -- # nvme_num=2
00:09:26.160 14:14:07 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:09:26.419 14:14:07 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:09:26.419 14:14:07 -- common/autotest_common.sh@1205 -- # local i=0
00:09:26.419 14:14:07 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL
00:09:26.419 14:14:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME
00:09:26.419 14:14:07 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL
00:09:26.419 14:14:07 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME
00:09:26.419 14:14:07 -- common/autotest_common.sh@1217 -- # return 0
00:09:26.419 14:14:07 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection ))
00:09:26.419 14:14:07 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:26.419 14:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable
00:09:26.419 14:14:07 -- common/autotest_common.sh@10 -- # set +x
00:09:26.419 14:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:09:26.419 14:14:07 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:09:26.419 14:14:07 -- target/nvme_cli.sh@70 -- # nvmftestfini
00:09:26.419 14:14:07 -- nvmf/common.sh@477 -- # nvmfcleanup
00:09:26.419 14:14:07 -- nvmf/common.sh@117 -- # sync
00:09:26.419 14:14:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:26.419 14:14:07 -- nvmf/common.sh@120 -- # set +e
00:09:26.419 14:14:07 -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:26.419 14:14:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:09:26.677 14:14:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:26.677 14:14:08 -- nvmf/common.sh@124 -- # set -e
00:09:26.677 14:14:08 -- nvmf/common.sh@125 -- # return 0
00:09:26.677 14:14:08 -- nvmf/common.sh@478 -- # '[' -n 3110176 ']'
00:09:26.677 14:14:08 -- nvmf/common.sh@479 -- # killprocess 3110176
00:09:26.677 14:14:08 -- common/autotest_common.sh@936 -- # '[' -z 3110176 ']'
00:09:26.677 14:14:08 -- common/autotest_common.sh@940 -- # kill -0 3110176
00:09:26.677 14:14:08 -- common/autotest_common.sh@941 -- # uname
00:09:26.677 14:14:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:26.677 14:14:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3110176
00:09:26.677 14:14:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:26.677 14:14:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:26.677 14:14:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3110176'
killing process with pid 3110176
00:09:26.677 14:14:08 -- common/autotest_common.sh@955 -- # kill 3110176
00:09:26.677 14:14:08 -- common/autotest_common.sh@960 -- # wait 3110176
00:09:26.936 14:14:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:09:26.936 14:14:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:09:26.936 14:14:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:09:26.936 14:14:08 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:26.936 14:14:08 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:26.936 14:14:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:26.936 14:14:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:26.936 14:14:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:28.843 14:14:10 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:28.843
00:09:28.843 real 0m7.763s
00:09:28.843 user 0m14.856s
00:09:28.843 sys 0m1.814s
00:09:28.843 14:14:10 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:09:28.843 14:14:10 -- common/autotest_common.sh@10 -- # set +x
00:09:28.843 ************************************
00:09:28.843 END TEST nvmf_nvme_cli
00:09:28.843 ************************************
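The teardown that closes the test is the same pattern every target test in this run uses: drop the trap, disconnect the initiator, delete the subsystem, unload the host modules, and kill the target by pid. Roughly (a sketch; killprocess is the autotest_common.sh helper that kills and then waits on the pid, and $rpc is our shorthand again):

nvme disconnect -n nqn.2016-06.io.spdk:cnode1
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # killprocess
ip -4 addr flush cvl_0_1             # release the initiator-side address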
00:09:28.843 14:14:10 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]]
00:09:28.843 14:14:10 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:09:28.843 14:14:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:09:28.843 14:14:10 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:28.843 14:14:10 -- common/autotest_common.sh@10 -- # set +x
00:09:29.103 ************************************
00:09:29.103 START TEST nvmf_vfio_user
00:09:29.103 ************************************
00:09:29.103 14:14:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:09:29.103 * Looking for test storage...
00:09:29.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:29.103 14:14:10 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:29.103 14:14:10 -- nvmf/common.sh@7 -- # uname -s
00:09:29.103 14:14:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:29.103 14:14:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:29.103 14:14:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:29.103 14:14:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:29.103 14:14:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:29.103 14:14:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:29.103 14:14:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:29.103 14:14:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:29.103 14:14:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:29.103 14:14:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:29.103 14:14:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:09:29.103 14:14:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc
00:09:29.103 14:14:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:29.103 14:14:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:29.103 14:14:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:29.103 14:14:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:29.103 14:14:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:29.103 14:14:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:29.103 14:14:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:29.103 14:14:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:29.103 14:14:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:29.103 14:14:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:29.104 14:14:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:29.104 14:14:10 -- paths/export.sh@5 -- # export PATH
00:09:29.104 14:14:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:29.104 14:14:10 -- nvmf/common.sh@47 -- # : 0
00:09:29.104 14:14:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:09:29.104 14:14:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:09:29.104 14:14:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:29.104 14:14:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:29.104 14:14:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:29.104 14:14:10 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:09:29.104 14:14:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:09:29.104 14:14:10 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:09:29.104 14:14:10 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64
00:09:29.104 14:14:10 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:09:29.104 14:14:10 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2
00:09:29.104 14:14:10 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:09:29.104 14:14:10 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER
00:09:29.104 14:14:10 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER
00:09:29.104 14:14:10 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user
00:09:29.104 14:14:10 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' ''
00:09:29.104 14:14:10 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=
00:09:29.104 14:14:10 -- target/nvmf_vfio_user.sh@52 -- # local transport_args=
00:09:29.104 14:14:10 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3110914
00:09:29.104 14:14:10 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'
00:09:29.104 14:14:10 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3110914'
Process pid: 3110914
00:09:29.104 14:14:10 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:09:29.104 14:14:10 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3110914
00:09:29.104 14:14:10 -- common/autotest_common.sh@817 -- # '[' -z 3110914 ']'
00:09:29.104 14:14:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:29.104 14:14:10 -- common/autotest_common.sh@822 -- # local max_retries=100
00:09:29.104 14:14:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:29.104 14:14:10 -- common/autotest_common.sh@826 -- # xtrace_disable
00:09:29.104 14:14:10 -- common/autotest_common.sh@10 -- # set +x
00:09:29.104 [2024-04-26 14:14:10.599606] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
00:09:29.104 [2024-04-26 14:14:10.599702] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:29.104 EAL: No free 2048 kB hugepages reported on node 1
00:09:29.104 [2024-04-26 14:14:10.658446] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:29.362 [2024-04-26 14:14:10.773972] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:29.362 [2024-04-26 14:14:10.774026] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:29.362 [2024-04-26 14:14:10.774042] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:29.362 [2024-04-26 14:14:10.774055] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:29.362 [2024-04-26 14:14:10.774067] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:29.362 [2024-04-26 14:14:10.774154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:29.362 [2024-04-26 14:14:10.774205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:09:29.362 [2024-04-26 14:14:10.774255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:09:29.362 [2024-04-26 14:14:10.774258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:29.362 14:14:10 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:09:29.362 14:14:10 -- common/autotest_common.sh@850 -- # return 0
00:09:29.362 14:14:10 -- target/nvmf_vfio_user.sh@62 -- # sleep 1
00:09:30.736 14:14:11 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER
00:09:30.736 14:14:12 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user
00:09:30.736 14:14:12 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2
00:09:30.736 14:14:12 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:09:30.736 14:14:12 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1
00:09:30.736 14:14:12 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:09:30.994 Malloc1
00:09:30.994 14:14:12 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
00:09:31.252 14:14:12 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:09:31.818 14:14:13 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
00:09:31.818 14:14:13 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:09:31.819 14:14:13 --
target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:09:31.819 14:14:13 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:32.385 Malloc2 00:09:32.385 14:14:13 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:09:32.643 14:14:13 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:09:32.901 14:14:14 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:09:33.162 14:14:14 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:09:33.162 14:14:14 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:09:33.162 14:14:14 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:33.162 14:14:14 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:33.162 14:14:14 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:09:33.162 14:14:14 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:33.162 [2024-04-26 14:14:14.534944] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:09:33.162 [2024-04-26 14:14:14.534994] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3111248 ] 00:09:33.162 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.162 [2024-04-26 14:14:14.575585] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:09:33.162 [2024-04-26 14:14:14.578121] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:33.162 [2024-04-26 14:14:14.578152] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb992800000 00:09:33.162 [2024-04-26 14:14:14.579110] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:33.162 [2024-04-26 14:14:14.580108] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:33.162 [2024-04-26 14:14:14.581115] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:33.162 [2024-04-26 14:14:14.582123] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:33.162 [2024-04-26 14:14:14.583130] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:33.162 [2024-04-26 14:14:14.584137] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:09:33.162 [2024-04-26 14:14:14.585136] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:33.162 [2024-04-26 14:14:14.586146] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:33.162 [2024-04-26 14:14:14.587150] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:33.162 [2024-04-26 14:14:14.587176] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb9927f5000 00:09:33.162 [2024-04-26 14:14:14.588627] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:33.162 [2024-04-26 14:14:14.609536] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:09:33.162 [2024-04-26 14:14:14.609575] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:09:33.162 [2024-04-26 14:14:14.614310] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:33.162 [2024-04-26 14:14:14.614382] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:33.162 [2024-04-26 14:14:14.614487] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:09:33.162 [2024-04-26 14:14:14.614520] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:09:33.162 [2024-04-26 14:14:14.614533] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:09:33.162 [2024-04-26 14:14:14.615302] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:09:33.162 [2024-04-26 14:14:14.615323] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:09:33.162 [2024-04-26 14:14:14.615337] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:09:33.162 [2024-04-26 14:14:14.616306] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:33.162 [2024-04-26 14:14:14.616326] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:09:33.162 [2024-04-26 14:14:14.616341] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:09:33.162 [2024-04-26 14:14:14.617313] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:09:33.162 [2024-04-26 14:14:14.617332] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:33.162 [2024-04-26 14:14:14.618316] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:09:33.162 [2024-04-26 14:14:14.618338] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:09:33.162 [2024-04-26 14:14:14.618349] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:09:33.162 [2024-04-26 14:14:14.618363] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:33.162 [2024-04-26 14:14:14.618475] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:09:33.163 [2024-04-26 14:14:14.618485] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:33.163 [2024-04-26 14:14:14.618495] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:09:33.163 [2024-04-26 14:14:14.619332] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:09:33.163 [2024-04-26 14:14:14.620340] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:09:33.163 [2024-04-26 14:14:14.621342] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:33.163 [2024-04-26 14:14:14.622344] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:33.163 [2024-04-26 14:14:14.622447] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:33.163 [2024-04-26 14:14:14.623367] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:09:33.163 [2024-04-26 14:14:14.623387] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:33.163 [2024-04-26 14:14:14.623397] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:09:33.163 [2024-04-26 14:14:14.623425] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:09:33.163 [2024-04-26 14:14:14.623441] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:09:33.163 [2024-04-26 14:14:14.623469] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:33.163 [2024-04-26 14:14:14.623480] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:33.163 [2024-04-26 14:14:14.623501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:33.163 [2024-04-26 
14:14:14.623563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:33.163 [2024-04-26 14:14:14.623580] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:09:33.163 [2024-04-26 14:14:14.623591] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:09:33.163 [2024-04-26 14:14:14.623600] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:09:33.163 [2024-04-26 14:14:14.623609] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:33.163 [2024-04-26 14:14:14.623618] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:09:33.163 [2024-04-26 14:14:14.623627] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:09:33.163 [2024-04-26 14:14:14.623647] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:09:33.163 [2024-04-26 14:14:14.623662] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:09:33.163 [2024-04-26 14:14:14.623689] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:33.163 [2024-04-26 14:14:14.623706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:33.163 [2024-04-26 14:14:14.623732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:33.163 [2024-04-26 14:14:14.623748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:33.163 [2024-04-26 14:14:14.623762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:33.163 [2024-04-26 14:14:14.623777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:33.163 [2024-04-26 14:14:14.623786] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:09:33.163 [2024-04-26 14:14:14.623804] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:33.163 [2024-04-26 14:14:14.623820] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:33.163 [2024-04-26 14:14:14.623838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:33.163 [2024-04-26 14:14:14.623851] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:09:33.163 [2024-04-26 14:14:14.623861] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:33.163 [2024-04-26 14:14:14.623877] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:09:33.163 [2024-04-26 14:14:14.623890] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:09:33.163 [2024-04-26 14:14:14.623904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:33.163 [2024-04-26 14:14:14.623923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:33.163 [2024-04-26 14:14:14.623984] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:09:33.163 [2024-04-26 14:14:14.624000] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:09:33.163 [2024-04-26 14:14:14.624015] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:33.163 [2024-04-26 14:14:14.624025] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:33.163 [2024-04-26 14:14:14.624036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:33.163 [2024-04-26 14:14:14.624055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:33.163 [2024-04-26 14:14:14.624073] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:09:33.163 [2024-04-26 14:14:14.624095] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:09:33.163 [2024-04-26 14:14:14.624111] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:09:33.163 [2024-04-26 14:14:14.624125] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:33.163 [2024-04-26 14:14:14.624134] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:33.163 [2024-04-26 14:14:14.624145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:33.163 [2024-04-26 14:14:14.624172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:33.163 [2024-04-26 14:14:14.624196] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:33.163 [2024-04-26 14:14:14.624213] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:33.163 [2024-04-26 14:14:14.624227] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fb000 len:4096 00:09:33.163 [2024-04-26 14:14:14.624237] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:33.163 [2024-04-26 14:14:14.624248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:33.163 [2024-04-26 14:14:14.624265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:33.163 [2024-04-26 14:14:14.624282] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:33.163 [2024-04-26 14:14:14.624296] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:09:33.163 [2024-04-26 14:14:14.624312] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:09:33.163 [2024-04-26 14:14:14.624324] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:33.163 [2024-04-26 14:14:14.624334] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:09:33.163 [2024-04-26 14:14:14.624344] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:09:33.163 [2024-04-26 14:14:14.624352] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:09:33.163 [2024-04-26 14:14:14.624362] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:09:33.163 [2024-04-26 14:14:14.624389] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:33.163 [2024-04-26 14:14:14.624410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:33.163 [2024-04-26 14:14:14.624432] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:33.163 [2024-04-26 14:14:14.624446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:33.163 [2024-04-26 14:14:14.624464] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:33.163 [2024-04-26 14:14:14.624478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:33.163 [2024-04-26 14:14:14.624497] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:33.163 [2024-04-26 14:14:14.624510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:33.163 [2024-04-26 14:14:14.624531] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:33.164 [2024-04-26 14:14:14.624542] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:33.164 [2024-04-26 14:14:14.624549] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:33.164 [2024-04-26 14:14:14.624556] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:33.164 [2024-04-26 14:14:14.624567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:33.164 [2024-04-26 14:14:14.624581] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:33.164 [2024-04-26 14:14:14.624590] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:33.164 [2024-04-26 14:14:14.624601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:33.164 [2024-04-26 14:14:14.624614] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:33.164 [2024-04-26 14:14:14.624624] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:33.164 [2024-04-26 14:14:14.624649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:33.164 [2024-04-26 14:14:14.624666] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:33.164 [2024-04-26 14:14:14.624676] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:33.164 [2024-04-26 14:14:14.624686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:33.164 [2024-04-26 14:14:14.624700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:33.164 [2024-04-26 14:14:14.624724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:33.164 [2024-04-26 14:14:14.624743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:33.164 [2024-04-26 14:14:14.624757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:33.164 ===================================================== 00:09:33.164 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:33.164 ===================================================== 00:09:33.164 Controller Capabilities/Features 00:09:33.164 ================================ 00:09:33.164 Vendor ID: 4e58 00:09:33.164 Subsystem Vendor ID: 4e58 00:09:33.164 Serial Number: SPDK1 00:09:33.164 Model Number: SPDK bdev Controller 00:09:33.164 Firmware Version: 24.05 00:09:33.164 Recommended Arb Burst: 6 00:09:33.164 IEEE OUI Identifier: 8d 6b 50 00:09:33.164 Multi-path I/O 00:09:33.164 May have multiple subsystem ports: Yes 00:09:33.164 May have multiple controllers: Yes 00:09:33.164 Associated with SR-IOV VF: No 00:09:33.164 Max Data Transfer Size: 131072 00:09:33.164 Max Number of Namespaces: 32 00:09:33.164 Max Number of I/O Queues: 127 00:09:33.164 NVMe 
Specification Version (VS): 1.3 00:09:33.164 NVMe Specification Version (Identify): 1.3 00:09:33.164 Maximum Queue Entries: 256 00:09:33.164 Contiguous Queues Required: Yes 00:09:33.164 Arbitration Mechanisms Supported 00:09:33.164 Weighted Round Robin: Not Supported 00:09:33.164 Vendor Specific: Not Supported 00:09:33.164 Reset Timeout: 15000 ms 00:09:33.164 Doorbell Stride: 4 bytes 00:09:33.164 NVM Subsystem Reset: Not Supported 00:09:33.164 Command Sets Supported 00:09:33.164 NVM Command Set: Supported 00:09:33.164 Boot Partition: Not Supported 00:09:33.164 Memory Page Size Minimum: 4096 bytes 00:09:33.164 Memory Page Size Maximum: 4096 bytes 00:09:33.164 Persistent Memory Region: Not Supported 00:09:33.164 Optional Asynchronous Events Supported 00:09:33.164 Namespace Attribute Notices: Supported 00:09:33.164 Firmware Activation Notices: Not Supported 00:09:33.164 ANA Change Notices: Not Supported 00:09:33.164 PLE Aggregate Log Change Notices: Not Supported 00:09:33.164 LBA Status Info Alert Notices: Not Supported 00:09:33.164 EGE Aggregate Log Change Notices: Not Supported 00:09:33.164 Normal NVM Subsystem Shutdown event: Not Supported 00:09:33.164 Zone Descriptor Change Notices: Not Supported 00:09:33.164 Discovery Log Change Notices: Not Supported 00:09:33.164 Controller Attributes 00:09:33.164 128-bit Host Identifier: Supported 00:09:33.164 Non-Operational Permissive Mode: Not Supported 00:09:33.164 NVM Sets: Not Supported 00:09:33.164 Read Recovery Levels: Not Supported 00:09:33.164 Endurance Groups: Not Supported 00:09:33.164 Predictable Latency Mode: Not Supported 00:09:33.164 Traffic Based Keep ALive: Not Supported 00:09:33.164 Namespace Granularity: Not Supported 00:09:33.164 SQ Associations: Not Supported 00:09:33.164 UUID List: Not Supported 00:09:33.164 Multi-Domain Subsystem: Not Supported 00:09:33.164 Fixed Capacity Management: Not Supported 00:09:33.164 Variable Capacity Management: Not Supported 00:09:33.164 Delete Endurance Group: Not Supported 00:09:33.164 Delete NVM Set: Not Supported 00:09:33.164 Extended LBA Formats Supported: Not Supported 00:09:33.164 Flexible Data Placement Supported: Not Supported 00:09:33.164 00:09:33.164 Controller Memory Buffer Support 00:09:33.164 ================================ 00:09:33.164 Supported: No 00:09:33.164 00:09:33.164 Persistent Memory Region Support 00:09:33.164 ================================ 00:09:33.164 Supported: No 00:09:33.164 00:09:33.164 Admin Command Set Attributes 00:09:33.164 ============================ 00:09:33.164 Security Send/Receive: Not Supported 00:09:33.164 Format NVM: Not Supported 00:09:33.164 Firmware Activate/Download: Not Supported 00:09:33.164 Namespace Management: Not Supported 00:09:33.164 Device Self-Test: Not Supported 00:09:33.164 Directives: Not Supported 00:09:33.164 NVMe-MI: Not Supported 00:09:33.164 Virtualization Management: Not Supported 00:09:33.164 Doorbell Buffer Config: Not Supported 00:09:33.164 Get LBA Status Capability: Not Supported 00:09:33.164 Command & Feature Lockdown Capability: Not Supported 00:09:33.164 Abort Command Limit: 4 00:09:33.164 Async Event Request Limit: 4 00:09:33.164 Number of Firmware Slots: N/A 00:09:33.164 Firmware Slot 1 Read-Only: N/A 00:09:33.164 Firmware Activation Without Reset: N/A 00:09:33.164 Multiple Update Detection Support: N/A 00:09:33.164 Firmware Update Granularity: No Information Provided 00:09:33.164 Per-Namespace SMART Log: No 00:09:33.164 Asymmetric Namespace Access Log Page: Not Supported 00:09:33.164 Subsystem NQN: 
nqn.2019-07.io.spdk:cnode1 00:09:33.164 Command Effects Log Page: Supported 00:09:33.164 Get Log Page Extended Data: Supported 00:09:33.164 Telemetry Log Pages: Not Supported 00:09:33.164 Persistent Event Log Pages: Not Supported 00:09:33.164 Supported Log Pages Log Page: May Support 00:09:33.164 Commands Supported & Effects Log Page: Not Supported 00:09:33.164 Feature Identifiers & Effects Log Page:May Support 00:09:33.164 NVMe-MI Commands & Effects Log Page: May Support 00:09:33.164 Data Area 4 for Telemetry Log: Not Supported 00:09:33.164 Error Log Page Entries Supported: 128 00:09:33.164 Keep Alive: Supported 00:09:33.164 Keep Alive Granularity: 10000 ms 00:09:33.164 00:09:33.164 NVM Command Set Attributes 00:09:33.164 ========================== 00:09:33.164 Submission Queue Entry Size 00:09:33.164 Max: 64 00:09:33.164 Min: 64 00:09:33.164 Completion Queue Entry Size 00:09:33.164 Max: 16 00:09:33.164 Min: 16 00:09:33.164 Number of Namespaces: 32 00:09:33.164 Compare Command: Supported 00:09:33.164 Write Uncorrectable Command: Not Supported 00:09:33.164 Dataset Management Command: Supported 00:09:33.164 Write Zeroes Command: Supported 00:09:33.164 Set Features Save Field: Not Supported 00:09:33.164 Reservations: Not Supported 00:09:33.164 Timestamp: Not Supported 00:09:33.164 Copy: Supported 00:09:33.164 Volatile Write Cache: Present 00:09:33.164 Atomic Write Unit (Normal): 1 00:09:33.164 Atomic Write Unit (PFail): 1 00:09:33.164 Atomic Compare & Write Unit: 1 00:09:33.164 Fused Compare & Write: Supported 00:09:33.164 Scatter-Gather List 00:09:33.164 SGL Command Set: Supported (Dword aligned) 00:09:33.164 SGL Keyed: Not Supported 00:09:33.164 SGL Bit Bucket Descriptor: Not Supported 00:09:33.164 SGL Metadata Pointer: Not Supported 00:09:33.164 Oversized SGL: Not Supported 00:09:33.164 SGL Metadata Address: Not Supported 00:09:33.164 SGL Offset: Not Supported 00:09:33.164 Transport SGL Data Block: Not Supported 00:09:33.164 Replay Protected Memory Block: Not Supported 00:09:33.164 00:09:33.164 Firmware Slot Information 00:09:33.164 ========================= 00:09:33.164 Active slot: 1 00:09:33.164 Slot 1 Firmware Revision: 24.05 00:09:33.164 00:09:33.164 00:09:33.164 Commands Supported and Effects 00:09:33.164 ============================== 00:09:33.164 Admin Commands 00:09:33.164 -------------- 00:09:33.164 Get Log Page (02h): Supported 00:09:33.165 Identify (06h): Supported 00:09:33.165 Abort (08h): Supported 00:09:33.165 Set Features (09h): Supported 00:09:33.165 Get Features (0Ah): Supported 00:09:33.165 Asynchronous Event Request (0Ch): Supported 00:09:33.165 Keep Alive (18h): Supported 00:09:33.165 I/O Commands 00:09:33.165 ------------ 00:09:33.165 Flush (00h): Supported LBA-Change 00:09:33.165 Write (01h): Supported LBA-Change 00:09:33.165 Read (02h): Supported 00:09:33.165 Compare (05h): Supported 00:09:33.165 Write Zeroes (08h): Supported LBA-Change 00:09:33.165 Dataset Management (09h): Supported LBA-Change 00:09:33.165 Copy (19h): Supported LBA-Change 00:09:33.165 Unknown (79h): Supported LBA-Change 00:09:33.165 Unknown (7Ah): Supported 00:09:33.165 00:09:33.165 Error Log 00:09:33.165 ========= 00:09:33.165 00:09:33.165 Arbitration 00:09:33.165 =========== 00:09:33.165 Arbitration Burst: 1 00:09:33.165 00:09:33.165 Power Management 00:09:33.165 ================ 00:09:33.165 Number of Power States: 1 00:09:33.165 Current Power State: Power State #0 00:09:33.165 Power State #0: 00:09:33.165 Max Power: 0.00 W 00:09:33.165 Non-Operational State: Operational 00:09:33.165 Entry 
Latency: Not Reported 00:09:33.165 Exit Latency: Not Reported 00:09:33.165 Relative Read Throughput: 0 00:09:33.165 Relative Read Latency: 0 00:09:33.165 Relative Write Throughput: 0 00:09:33.165 Relative Write Latency: 0 00:09:33.165 Idle Power: Not Reported 00:09:33.165 Active Power: Not Reported 00:09:33.165 Non-Operational Permissive Mode: Not Supported 00:09:33.165 00:09:33.165 Health Information 00:09:33.165 ================== 00:09:33.165 Critical Warnings: 00:09:33.165 Available Spare Space: OK 00:09:33.165 Temperature: OK 00:09:33.165 Device Reliability: OK 00:09:33.165 Read Only: No 00:09:33.165 Volatile Memory Backup: OK 00:09:33.165 Current Temperature: 0 Kelvin (-273 Celsius) [2024-04-26 14:14:14.624904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:33.165 [2024-04-26 14:14:14.624923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:33.165 [2024-04-26 14:14:14.624964] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:09:33.165 [2024-04-26 14:14:14.624983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:33.165 [2024-04-26 14:14:14.624996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:33.165 [2024-04-26 14:14:14.625008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:33.165 [2024-04-26 14:14:14.625020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:33.165 [2024-04-26 14:14:14.628653] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:33.165 [2024-04-26 14:14:14.628681] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:09:33.165 [2024-04-26 14:14:14.629382] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:33.165 [2024-04-26 14:14:14.629459] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:09:33.165 [2024-04-26 14:14:14.629473] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:09:33.165 [2024-04-26 14:14:14.630389] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:09:33.165 [2024-04-26 14:14:14.630415] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:09:33.165 [2024-04-26 14:14:14.630488] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:09:33.165 [2024-04-26 14:14:14.632435] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:33.165 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:33.165 Available Spare: 0% 00:09:33.165 Available Spare Threshold: 0% 00:09:33.165 Life Percentage Used: 0% 
00:09:33.165 Data Units Read: 0 00:09:33.165 Data Units Written: 0 00:09:33.165 Host Read Commands: 0 00:09:33.165 Host Write Commands: 0 00:09:33.165 Controller Busy Time: 0 minutes 00:09:33.165 Power Cycles: 0 00:09:33.165 Power On Hours: 0 hours 00:09:33.165 Unsafe Shutdowns: 0 00:09:33.165 Unrecoverable Media Errors: 0 00:09:33.165 Lifetime Error Log Entries: 0 00:09:33.165 Warning Temperature Time: 0 minutes 00:09:33.165 Critical Temperature Time: 0 minutes 00:09:33.165 00:09:33.165 Number of Queues 00:09:33.165 ================ 00:09:33.165 Number of I/O Submission Queues: 127 00:09:33.165 Number of I/O Completion Queues: 127 00:09:33.165 00:09:33.165 Active Namespaces 00:09:33.165 ================= 00:09:33.165 Namespace ID:1 00:09:33.165 Error Recovery Timeout: Unlimited 00:09:33.165 Command Set Identifier: NVM (00h) 00:09:33.165 Deallocate: Supported 00:09:33.165 Deallocated/Unwritten Error: Not Supported 00:09:33.165 Deallocated Read Value: Unknown 00:09:33.165 Deallocate in Write Zeroes: Not Supported 00:09:33.165 Deallocated Guard Field: 0xFFFF 00:09:33.165 Flush: Supported 00:09:33.165 Reservation: Supported 00:09:33.165 Namespace Sharing Capabilities: Multiple Controllers 00:09:33.165 Size (in LBAs): 131072 (0GiB) 00:09:33.165 Capacity (in LBAs): 131072 (0GiB) 00:09:33.165 Utilization (in LBAs): 131072 (0GiB) 00:09:33.165 NGUID: F26ACF7AD7D24DBEAEB20BFEA970C77E 00:09:33.165 UUID: f26acf7a-d7d2-4dbe-aeb2-0bfea970c77e 00:09:33.165 Thin Provisioning: Not Supported 00:09:33.165 Per-NS Atomic Units: Yes 00:09:33.165 Atomic Boundary Size (Normal): 0 00:09:33.165 Atomic Boundary Size (PFail): 0 00:09:33.165 Atomic Boundary Offset: 0 00:09:33.165 Maximum Single Source Range Length: 65535 00:09:33.165 Maximum Copy Length: 65535 00:09:33.165 Maximum Source Range Count: 1 00:09:33.165 NGUID/EUI64 Never Reused: No 00:09:33.165 Namespace Write Protected: No 00:09:33.165 Number of LBA Formats: 1 00:09:33.165 Current LBA Format: LBA Format #00 00:09:33.165 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:33.165 00:09:33.165 14:14:14 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:33.165 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.424 [2024-04-26 14:14:14.859406] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:38.691 [2024-04-26 14:14:19.885610] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:38.691 Initializing NVMe Controllers 00:09:38.691 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:38.691 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:38.691 Initialization complete. Launching workers. 
00:09:38.691 ======================================================== 00:09:38.691 Latency(us) 00:09:38.691 Device Information : IOPS MiB/s Average min max 00:09:38.691 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 24118.80 94.21 5311.53 1487.79 7648.12 00:09:38.691 ======================================================== 00:09:38.691 Total : 24118.80 94.21 5311.53 1487.79 7648.12 00:09:38.691 00:09:38.691 14:14:19 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:38.691 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.691 [2024-04-26 14:14:20.115830] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:43.954 [2024-04-26 14:14:25.158771] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:43.954 Initializing NVMe Controllers 00:09:43.954 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:43.954 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:43.954 Initialization complete. Launching workers. 00:09:43.954 ======================================================== 00:09:43.954 Latency(us) 00:09:43.955 Device Information : IOPS MiB/s Average min max 00:09:43.955 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16050.87 62.70 7979.66 7067.31 14137.18 00:09:43.955 ======================================================== 00:09:43.955 Total : 16050.87 62.70 7979.66 7067.31 14137.18 00:09:43.955 00:09:43.955 14:14:25 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:09:43.955 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.955 [2024-04-26 14:14:25.392972] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:49.223 [2024-04-26 14:14:30.462914] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:49.223 Initializing NVMe Controllers 00:09:49.223 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:49.223 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:49.223 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:09:49.223 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:09:49.223 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:09:49.223 Initialization complete. Launching workers. 
00:09:49.223 Starting thread on core 2 00:09:49.223 Starting thread on core 3 00:09:49.223 Starting thread on core 1 00:09:49.223 14:14:30 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:09:49.223 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.223 [2024-04-26 14:14:30.755110] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:52.512 [2024-04-26 14:14:33.815770] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:52.512 Initializing NVMe Controllers 00:09:52.512 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:52.512 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:52.512 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:09:52.512 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:09:52.512 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:09:52.512 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:09:52.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:09:52.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:09:52.512 Initialization complete. Launching workers. 00:09:52.512 Starting thread on core 1 with urgent priority queue 00:09:52.512 Starting thread on core 2 with urgent priority queue 00:09:52.512 Starting thread on core 3 with urgent priority queue 00:09:52.512 Starting thread on core 0 with urgent priority queue 00:09:52.512 SPDK bdev Controller (SPDK1 ) core 0: 8707.67 IO/s 11.48 secs/100000 ios 00:09:52.512 SPDK bdev Controller (SPDK1 ) core 1: 6517.00 IO/s 15.34 secs/100000 ios 00:09:52.512 SPDK bdev Controller (SPDK1 ) core 2: 6510.67 IO/s 15.36 secs/100000 ios 00:09:52.512 SPDK bdev Controller (SPDK1 ) core 3: 7803.33 IO/s 12.82 secs/100000 ios 00:09:52.512 ======================================================== 00:09:52.512 00:09:52.512 14:14:33 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:52.512 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.770 [2024-04-26 14:14:34.105191] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:52.770 [2024-04-26 14:14:34.138911] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:52.770 Initializing NVMe Controllers 00:09:52.770 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:52.770 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:52.770 Namespace ID: 1 size: 0GB 00:09:52.770 Initialization complete. 00:09:52.770 INFO: using host memory buffer for IO 00:09:52.770 Hello world! 
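For reference, the target-side state that each of the example apps above attaches to was built with the handful of rpc.py calls logged earlier in this run; a minimal by-hand sketch of that sequence follows, using only commands that appear verbatim in this log. $SPDK and $rpc are shorthand introduced here for readability (not variables the test itself sets), and nvmf_tgt is assumed to be already running as shown above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout used by this job
    rpc=$SPDK/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    $rpc bdev_malloc_create 64 512 -b Malloc1                # 64 MiB malloc bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
    # any initiator-side tool then attaches with the same transport string, e.g.
    $SPDK/build/bin/spdk_nvme_identify -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'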
00:09:52.770 14:14:34 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:52.770 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.028 [2024-04-26 14:14:34.417009] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:53.963 Initializing NVMe Controllers 00:09:53.963 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:53.963 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:53.963 Initialization complete. Launching workers. 00:09:53.963 submit (in ns) avg, min, max = 11224.6, 4512.6, 4033197.0 00:09:53.963 complete (in ns) avg, min, max = 27434.8, 2617.8, 4050352.6 00:09:53.963 00:09:53.963 Submit histogram 00:09:53.963 ================ 00:09:53.963 Range in us Cumulative Count 00:09:53.963 4.504 - 4.527: 0.0084% ( 1) 00:09:53.963 4.527 - 4.551: 0.0167% ( 1) 00:09:53.963 4.551 - 4.575: 0.2840% ( 32) 00:09:53.964 4.575 - 4.599: 1.2529% ( 116) 00:09:53.964 4.599 - 4.622: 3.9008% ( 317) 00:09:53.964 4.622 - 4.646: 8.3194% ( 529) 00:09:53.964 4.646 - 4.670: 12.4541% ( 495) 00:09:53.964 4.670 - 4.693: 15.2188% ( 331) 00:09:53.964 4.693 - 4.717: 16.0708% ( 102) 00:09:53.964 4.717 - 4.741: 16.5553% ( 58) 00:09:53.964 4.741 - 4.764: 17.4658% ( 109) 00:09:53.964 4.764 - 4.788: 19.2783% ( 217) 00:09:53.964 4.788 - 4.812: 23.6301% ( 521) 00:09:53.964 4.812 - 4.836: 29.4103% ( 692) 00:09:53.964 4.836 - 4.859: 35.2155% ( 695) 00:09:53.964 4.859 - 4.883: 37.3371% ( 254) 00:09:53.964 4.883 - 4.907: 38.0221% ( 82) 00:09:53.964 4.907 - 4.930: 38.3812% ( 43) 00:09:53.964 4.930 - 4.954: 39.0327% ( 78) 00:09:53.964 4.954 - 4.978: 39.7344% ( 84) 00:09:53.964 4.978 - 5.001: 40.8453% ( 133) 00:09:53.964 5.001 - 5.025: 41.9562% ( 133) 00:09:53.964 5.025 - 5.049: 42.8583% ( 108) 00:09:53.964 5.049 - 5.073: 43.3762% ( 62) 00:09:53.964 5.073 - 5.096: 43.7771% ( 48) 00:09:53.964 5.096 - 5.120: 43.9693% ( 23) 00:09:53.964 5.120 - 5.144: 44.0277% ( 7) 00:09:53.964 5.144 - 5.167: 44.1280% ( 12) 00:09:53.964 5.167 - 5.191: 44.4621% ( 40) 00:09:53.964 5.191 - 5.215: 45.9405% ( 177) 00:09:53.964 5.215 - 5.239: 50.1587% ( 505) 00:09:53.964 5.239 - 5.262: 56.6739% ( 780) 00:09:53.964 5.262 - 5.286: 61.7608% ( 609) 00:09:53.964 5.286 - 5.310: 62.5459% ( 94) 00:09:53.964 5.310 - 5.333: 63.4146% ( 104) 00:09:53.964 5.333 - 5.357: 64.6091% ( 143) 00:09:53.964 5.357 - 5.381: 67.1985% ( 310) 00:09:53.964 5.381 - 5.404: 70.9238% ( 446) 00:09:53.964 5.404 - 5.428: 75.5179% ( 550) 00:09:53.964 5.428 - 5.452: 76.6288% ( 133) 00:09:53.964 5.452 - 5.476: 77.6980% ( 128) 00:09:53.964 5.476 - 5.499: 79.1931% ( 179) 00:09:53.964 5.499 - 5.523: 80.2205% ( 123) 00:09:53.964 5.523 - 5.547: 80.3625% ( 17) 00:09:53.964 5.547 - 5.570: 80.4544% ( 11) 00:09:53.964 5.570 - 5.594: 81.2813% ( 99) 00:09:53.964 5.594 - 5.618: 84.2716% ( 358) 00:09:53.964 5.618 - 5.641: 89.0244% ( 569) 00:09:53.964 5.641 - 5.665: 93.8273% ( 575) 00:09:53.964 5.665 - 5.689: 94.7127% ( 106) 00:09:53.964 5.689 - 5.713: 95.1637% ( 54) 00:09:53.964 5.713 - 5.736: 95.3809% ( 26) 00:09:53.964 5.736 - 5.760: 95.5563% ( 21) 00:09:53.964 5.760 - 5.784: 95.7066% ( 18) 00:09:53.964 5.784 - 5.807: 95.7985% ( 11) 00:09:53.964 5.807 - 5.831: 95.8904% ( 11) 00:09:53.964 5.831 - 5.855: 95.9823% ( 11) 00:09:53.964 5.855 - 5.879: 96.2162% ( 28) 00:09:53.964 5.879 - 5.902: 96.2997% ( 10) 00:09:53.964 5.902 - 5.926: 96.5002% 
( 24) 00:09:53.964 5.926 - 5.950: 96.5252% ( 3) 00:09:53.964 5.950 - 5.973: 96.5837% ( 7) 00:09:53.964 5.973 - 5.997: 96.6422% ( 7) 00:09:53.964 5.997 - 6.021: 96.7090% ( 8) 00:09:53.964 6.021 - 6.044: 96.7675% ( 7) 00:09:53.964 6.044 - 6.068: 96.8426% ( 9) 00:09:53.964 6.068 - 6.116: 96.9596% ( 14) 00:09:53.964 6.116 - 6.163: 97.1684% ( 25) 00:09:53.964 6.163 - 6.210: 97.4524% ( 34) 00:09:53.964 6.210 - 6.258: 97.5192% ( 8) 00:09:53.964 6.258 - 6.305: 97.6612% ( 17) 00:09:53.964 6.305 - 6.353: 97.8867% ( 27) 00:09:53.964 6.353 - 6.400: 98.0621% ( 21) 00:09:53.964 6.400 - 6.447: 98.2041% ( 17) 00:09:53.964 6.495 - 6.542: 98.2626% ( 7) 00:09:53.964 6.542 - 6.590: 98.3963% ( 16) 00:09:53.964 6.590 - 6.637: 98.4046% ( 1) 00:09:53.964 6.637 - 6.684: 98.4464% ( 5) 00:09:53.964 6.684 - 6.732: 98.5216% ( 9) 00:09:53.964 6.732 - 6.779: 98.5800% ( 7) 00:09:53.964 6.779 - 6.827: 98.5884% ( 1) 00:09:53.964 6.827 - 6.874: 98.6051% ( 2) 00:09:53.964 6.874 - 6.921: 98.6635% ( 7) 00:09:53.964 6.921 - 6.969: 98.9225% ( 31) 00:09:53.964 6.969 - 7.016: 99.0311% ( 13) 00:09:53.964 7.016 - 7.064: 99.0895% ( 7) 00:09:53.964 7.064 - 7.111: 99.1062% ( 2) 00:09:53.964 7.206 - 7.253: 99.1146% ( 1) 00:09:53.964 7.348 - 7.396: 99.1230% ( 1) 00:09:53.964 7.490 - 7.538: 99.1313% ( 1) 00:09:53.964 7.870 - 7.917: 99.1397% ( 1) 00:09:53.964 8.107 - 8.154: 99.1647% ( 3) 00:09:53.964 8.249 - 8.296: 99.1731% ( 1) 00:09:53.964 8.344 - 8.391: 99.1814% ( 1) 00:09:53.964 8.486 - 8.533: 99.1981% ( 2) 00:09:53.964 8.581 - 8.628: 99.2315% ( 4) 00:09:53.964 8.628 - 8.676: 99.2399% ( 1) 00:09:53.964 8.723 - 8.770: 99.2566% ( 2) 00:09:53.964 8.770 - 8.818: 99.2733% ( 2) 00:09:53.964 8.818 - 8.865: 99.2817% ( 1) 00:09:53.964 8.865 - 8.913: 99.2984% ( 2) 00:09:53.964 8.960 - 9.007: 99.3151% ( 2) 00:09:53.964 9.007 - 9.055: 99.3318% ( 2) 00:09:53.964 9.055 - 9.102: 99.3401% ( 1) 00:09:53.964 9.197 - 9.244: 99.3485% ( 1) 00:09:53.964 9.244 - 9.292: 99.3652% ( 2) 00:09:53.964 9.292 - 9.339: 99.3902% ( 3) 00:09:53.964 9.339 - 9.387: 99.4069% ( 2) 00:09:53.964 9.387 - 9.434: 99.4237% ( 2) 00:09:53.964 9.434 - 9.481: 99.4320% ( 1) 00:09:53.964 9.481 - 9.529: 99.4487% ( 2) 00:09:53.964 9.576 - 9.624: 99.4571% ( 1) 00:09:53.964 9.624 - 9.671: 99.4654% ( 1) 00:09:53.964 9.766 - 9.813: 99.4738% ( 1) 00:09:53.964 9.813 - 9.861: 99.4821% ( 1) 00:09:53.964 9.861 - 9.908: 99.4988% ( 2) 00:09:53.964 9.908 - 9.956: 99.5072% ( 1) 00:09:53.964 10.098 - 10.145: 99.5155% ( 1) 00:09:53.964 10.145 - 10.193: 99.5406% ( 3) 00:09:53.964 10.193 - 10.240: 99.5489% ( 1) 00:09:53.964 10.240 - 10.287: 99.5573% ( 1) 00:09:53.964 10.382 - 10.430: 99.5657% ( 1) 00:09:53.964 10.477 - 10.524: 99.5824% ( 2) 00:09:53.964 10.524 - 10.572: 99.5991% ( 2) 00:09:53.964 10.856 - 10.904: 99.6074% ( 1) 00:09:53.964 10.904 - 10.951: 99.6158% ( 1) 00:09:53.964 11.046 - 11.093: 99.6241% ( 1) 00:09:53.964 11.093 - 11.141: 99.6325% ( 1) 00:09:53.964 11.141 - 11.188: 99.6408% ( 1) 00:09:53.964 11.236 - 11.283: 99.6659% ( 3) 00:09:53.964 11.330 - 11.378: 99.6826% ( 2) 00:09:53.964 11.378 - 11.425: 99.6909% ( 1) 00:09:53.964 11.473 - 11.520: 99.6993% ( 1) 00:09:53.964 11.567 - 11.615: 99.7077% ( 1) 00:09:53.964 11.852 - 11.899: 99.7160% ( 1) 00:09:53.964 11.899 - 11.947: 99.7244% ( 1) 00:09:53.964 12.089 - 12.136: 99.7327% ( 1) 00:09:53.964 12.421 - 12.516: 99.7411% ( 1) 00:09:53.964 12.516 - 12.610: 99.7494% ( 1) 00:09:53.964 12.610 - 12.705: 99.7578% ( 1) 00:09:53.964 12.800 - 12.895: 99.7661% ( 1) 00:09:53.964 12.990 - 13.084: 99.7745% ( 1) 00:09:53.964 13.369 - 13.464: 99.7912% ( 2) 
00:09:53.964 13.464 - 13.559: 99.7995% ( 1) 00:09:53.964 13.748 - 13.843: 99.8413% ( 5) 00:09:53.964 14.033 - 14.127: 99.8496% ( 1) 00:09:53.964 3980.705 - 4004.978: 99.8998% ( 6) 00:09:53.964 4004.978 - 4029.250: 99.9833% ( 10) 00:09:53.964 4029.250 - 4053.523: 100.0000% ( 2) 00:09:53.964 00:09:53.964 Complete histogram 00:09:53.964 ================== 00:09:53.964 Range in us Cumulative Count 00:09:53.964 2.607 - 2.619: 0.0084% ( 1) 00:09:53.964 2.619 - 2.631: 1.0274% ( 122) 00:09:53.964 2.631 - 2.643: 9.3134% ( 992) 00:09:53.964 2.643 - 2.655: 15.3608% ( 724) 00:09:53.964 2.655 - 2.667: 20.1303% ( 571) 00:09:53.964 2.667 - 2.679: 39.6509% ( 2337) 00:09:53.964 2.679 - 2.690: 66.7641% ( 3246) 00:09:53.964 2.690 - 2.702: 77.4975% ( 1285) 00:09:53.964 2.702 - 2.714: 88.0555% ( 1264) 00:09:53.964 2.714 - 2.726: 94.0277% ( 715) 00:09:53.964 2.726 - 2.738: 96.3916% ( 283) 00:09:53.964 2.738 - 2.750: 97.5359% ( 137) 00:09:53.964 2.750 - 2.761: 97.8784% ( 41) 00:09:53.964 2.761 - 2.773: 98.0538% ( 21) 00:09:53.964 2.773 - 2.785: 98.1457% ( 11) 00:09:53.964 2.785 - 2.797: 98.2459% ( 12) [2024-04-26 14:14:35.443288] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:53.964 2.797 - 2.809: 98.2877% ( 5) 00:09:53.964 2.809 - 2.821: 98.3461% ( 7) 00:09:53.964 2.821 - 2.833: 98.3545% ( 1) 00:09:53.964 2.844 - 2.856: 98.3628% ( 1) 00:09:53.964 2.868 - 2.880: 98.3796% ( 2) 00:09:53.964 2.880 - 2.892: 98.4130% ( 4) 00:09:53.964 2.904 - 2.916: 98.4464% ( 4) 00:09:53.964 2.916 - 2.927: 98.4631% ( 2) 00:09:53.964 2.927 - 2.939: 98.4965% ( 4) 00:09:53.964 2.939 - 2.951: 98.5048% ( 1) 00:09:53.964 2.975 - 2.987: 98.5216% ( 2) 00:09:53.964 3.010 - 3.022: 98.5299% ( 1) 00:09:53.964 3.022 - 3.034: 98.5466% ( 2) 00:09:53.964 3.034 - 3.058: 98.5550% ( 1) 00:09:53.964 3.081 - 3.105: 98.5633% ( 1) 00:09:53.964 3.105 - 3.129: 98.5717% ( 1) 00:09:53.964 3.129 - 3.153: 98.5967% ( 3) 00:09:53.964 3.176 - 3.200: 98.6051% ( 1) 00:09:53.964 3.200 - 3.224: 98.6218% ( 2) 00:09:53.964 3.224 - 3.247: 98.6468% ( 3) 00:09:53.964 3.247 - 3.271: 98.6635% ( 2) 00:09:53.964 3.271 - 3.295: 98.6886% ( 3) 00:09:53.964 3.295 - 3.319: 98.7220% ( 4) 00:09:53.964 3.319 - 3.342: 98.7554% ( 4) 00:09:53.964 3.342 - 3.366: 98.7721% ( 2) 00:09:53.965 3.413 - 3.437: 98.7805% ( 1) 00:09:53.965 3.437 - 3.461: 98.7888% ( 1) 00:09:53.965 3.461 - 3.484: 98.8390% ( 6) 00:09:53.965 3.484 - 3.508: 98.8724% ( 4) 00:09:53.965 3.508 - 3.532: 98.9058% ( 4) 00:09:53.965 3.532 - 3.556: 98.9392% ( 4) 00:09:53.965 3.556 - 3.579: 98.9810% ( 5) 00:09:53.965 3.579 - 3.603: 98.9977% ( 2) 00:09:53.965 3.603 - 3.627: 99.0144% ( 2) 00:09:53.965 3.627 - 3.650: 99.0311% ( 2) 00:09:53.965 3.650 - 3.674: 99.0478% ( 2) 00:09:53.965 3.674 - 3.698: 99.0561% ( 1) 00:09:53.965 3.698 - 3.721: 99.0728% ( 2) 00:09:53.965 3.721 - 3.745: 99.0812% ( 1) 00:09:53.965 3.745 - 3.769: 99.0895% ( 1) 00:09:53.965 3.959 - 3.982: 99.0979% ( 1) 00:09:53.965 4.053 - 4.077: 99.1062% ( 1) 00:09:53.965 4.243 - 4.267: 99.1146% ( 1) 00:09:53.965 4.527 - 4.551: 99.1230% ( 1) 00:09:53.965 4.646 - 4.670: 99.1313% ( 1) 00:09:53.965 4.812 - 4.836: 99.1397% ( 1) 00:09:53.965 4.978 - 5.001: 99.1480% ( 1) 00:09:53.965 5.333 - 5.357: 99.1564% ( 1) 00:09:53.965 5.879 - 5.902: 99.1647% ( 1) 00:09:53.965 6.116 - 6.163: 99.1731% ( 1) 00:09:53.965 6.163 - 6.210: 99.1814% ( 1) 00:09:53.965 6.353 - 6.400: 99.1981% ( 2) 00:09:53.965 6.590 - 6.637: 99.2065% ( 1) 00:09:53.965 6.637 - 6.684: 99.2148% ( 1) 00:09:53.965 6.732 - 6.779: 99.2232% ( 1) 
00:09:53.965 7.064 - 7.111: 99.2399% ( 2) 00:09:53.965 7.159 - 7.206: 99.2482% ( 1) 00:09:53.965 7.301 - 7.348: 99.2566% ( 1) 00:09:53.965 7.396 - 7.443: 99.2650% ( 1) 00:09:53.965 7.443 - 7.490: 99.2733% ( 1) 00:09:53.965 7.870 - 7.917: 99.2817% ( 1) 00:09:53.965 7.917 - 7.964: 99.2900% ( 1) 00:09:53.965 8.249 - 8.296: 99.2984% ( 1) 00:09:53.965 8.296 - 8.344: 99.3067% ( 1) 00:09:53.965 8.533 - 8.581: 99.3151% ( 1) 00:09:53.965 8.960 - 9.007: 99.3234% ( 1) 00:09:53.965 9.671 - 9.719: 99.3318% ( 1) 00:09:53.965 9.956 - 10.003: 99.3401% ( 1) 00:09:53.965 10.382 - 10.430: 99.3485% ( 1) 00:09:53.965 10.714 - 10.761: 99.3568% ( 1) 00:09:53.965 11.710 - 11.757: 99.3652% ( 1) 00:09:53.965 12.231 - 12.326: 99.3735% ( 1) 00:09:53.965 12.610 - 12.705: 99.3819% ( 1) 00:09:53.965 3980.705 - 4004.978: 99.7912% ( 49) 00:09:53.965 4004.978 - 4029.250: 99.9749% ( 22) 00:09:53.965 4029.250 - 4053.523: 100.0000% ( 3) 00:09:53.965 00:09:53.965 14:14:35 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:09:53.965 14:14:35 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:53.965 14:14:35 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:09:53.965 14:14:35 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:09:53.965 14:14:35 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:54.223 [2024-04-26 14:14:35.774645] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:09:54.223 [ 00:09:54.223 { 00:09:54.223 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:54.223 "subtype": "Discovery", 00:09:54.223 "listen_addresses": [], 00:09:54.223 "allow_any_host": true, 00:09:54.223 "hosts": [] 00:09:54.223 }, 00:09:54.223 { 00:09:54.223 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:09:54.223 "subtype": "NVMe", 00:09:54.223 "listen_addresses": [ 00:09:54.223 { 00:09:54.223 "transport": "VFIOUSER", 00:09:54.223 "trtype": "VFIOUSER", 00:09:54.223 "adrfam": "IPv4", 00:09:54.223 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:09:54.223 "trsvcid": "0" 00:09:54.223 } 00:09:54.223 ], 00:09:54.223 "allow_any_host": true, 00:09:54.223 "hosts": [], 00:09:54.223 "serial_number": "SPDK1", 00:09:54.223 "model_number": "SPDK bdev Controller", 00:09:54.223 "max_namespaces": 32, 00:09:54.223 "min_cntlid": 1, 00:09:54.223 "max_cntlid": 65519, 00:09:54.223 "namespaces": [ 00:09:54.223 { 00:09:54.223 "nsid": 1, 00:09:54.223 "bdev_name": "Malloc1", 00:09:54.223 "name": "Malloc1", 00:09:54.223 "nguid": "F26ACF7AD7D24DBEAEB20BFEA970C77E", 00:09:54.223 "uuid": "f26acf7a-d7d2-4dbe-aeb2-0bfea970c77e" 00:09:54.223 } 00:09:54.223 ] 00:09:54.223 }, 00:09:54.223 { 00:09:54.223 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:09:54.223 "subtype": "NVMe", 00:09:54.224 "listen_addresses": [ 00:09:54.224 { 00:09:54.224 "transport": "VFIOUSER", 00:09:54.224 "trtype": "VFIOUSER", 00:09:54.224 "adrfam": "IPv4", 00:09:54.224 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:09:54.224 "trsvcid": "0" 00:09:54.224 } 00:09:54.224 ], 00:09:54.224 "allow_any_host": true, 00:09:54.224 "hosts": [], 00:09:54.224 "serial_number": "SPDK2", 00:09:54.224 "model_number": "SPDK bdev Controller", 00:09:54.224 "max_namespaces": 32, 00:09:54.224 "min_cntlid": 1, 00:09:54.224 "max_cntlid": 65519, 00:09:54.224 "namespaces": [ 
00:09:54.224 { 00:09:54.224 "nsid": 1, 00:09:54.224 "bdev_name": "Malloc2", 00:09:54.224 "name": "Malloc2", 00:09:54.224 "nguid": "8DAA769721EB4C4D9BD1B6777A34807E", 00:09:54.224 "uuid": "8daa7697-21eb-4c4d-9bd1-b6777a34807e" 00:09:54.224 } 00:09:54.224 ] 00:09:54.224 } 00:09:54.224 ] 00:09:54.481 14:14:35 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:09:54.481 14:14:35 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3113263 00:09:54.482 14:14:35 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:09:54.482 14:14:35 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:09:54.482 14:14:35 -- common/autotest_common.sh@1251 -- # local i=0 00:09:54.482 14:14:35 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:09:54.482 14:14:35 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:09:54.482 14:14:35 -- common/autotest_common.sh@1262 -- # return 0 00:09:54.482 14:14:35 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:09:54.482 14:14:35 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:09:54.482 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.482 [2024-04-26 14:14:35.956168] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:54.740 Malloc3 00:09:54.740 14:14:36 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:09:54.999 [2024-04-26 14:14:36.400332] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:54.999 14:14:36 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:54.999 Asynchronous Event Request test 00:09:54.999 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:54.999 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:54.999 Registering asynchronous event callbacks... 00:09:54.999 Starting namespace attribute notice tests for all controllers... 00:09:54.999 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:09:54.999 aer_cb - Changed Namespace 00:09:54.999 Cleaning up... 
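The AER exercise above reduces to three RPCs: create a backing bdev, attach it to cnode1 as a second namespace (which fires the namespace-attribute notice seen in aer_cb), then re-list the subsystems — the JSON that follows shows Malloc3 attached as nsid 2. A minimal Python sketch of the same flow; the rpc.py path and arguments are copied verbatim from the log above, while the rpc() helper and the final assertion are purely illustrative:

    import json
    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

    def rpc(*args):
        # rpc.py prints the JSON-RPC result (if any) on stdout
        return subprocess.run([RPC, *args], check=True,
                              capture_output=True, text=True).stdout

    rpc("bdev_malloc_create", "64", "512", "--name", "Malloc3")  # 64 MB bdev, 512 B blocks
    rpc("nvmf_subsystem_add_ns", "nqn.2019-07.io.spdk:cnode1", "Malloc3", "-n", "2")

    subsystems = json.loads(rpc("nvmf_get_subsystems"))
    cnode1 = next(s for s in subsystems if s["nqn"] == "nqn.2019-07.io.spdk:cnode1")
    assert any(ns["nsid"] == 2 and ns["name"] == "Malloc3"
               for ns in cnode1["namespaces"])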
00:09:55.258 [ 00:09:55.258 { 00:09:55.258 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:55.258 "subtype": "Discovery", 00:09:55.258 "listen_addresses": [], 00:09:55.258 "allow_any_host": true, 00:09:55.258 "hosts": [] 00:09:55.258 }, 00:09:55.258 { 00:09:55.258 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:09:55.258 "subtype": "NVMe", 00:09:55.258 "listen_addresses": [ 00:09:55.258 { 00:09:55.258 "transport": "VFIOUSER", 00:09:55.258 "trtype": "VFIOUSER", 00:09:55.258 "adrfam": "IPv4", 00:09:55.258 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:09:55.258 "trsvcid": "0" 00:09:55.258 } 00:09:55.258 ], 00:09:55.258 "allow_any_host": true, 00:09:55.258 "hosts": [], 00:09:55.258 "serial_number": "SPDK1", 00:09:55.258 "model_number": "SPDK bdev Controller", 00:09:55.258 "max_namespaces": 32, 00:09:55.258 "min_cntlid": 1, 00:09:55.258 "max_cntlid": 65519, 00:09:55.258 "namespaces": [ 00:09:55.258 { 00:09:55.258 "nsid": 1, 00:09:55.258 "bdev_name": "Malloc1", 00:09:55.258 "name": "Malloc1", 00:09:55.258 "nguid": "F26ACF7AD7D24DBEAEB20BFEA970C77E", 00:09:55.258 "uuid": "f26acf7a-d7d2-4dbe-aeb2-0bfea970c77e" 00:09:55.258 }, 00:09:55.258 { 00:09:55.258 "nsid": 2, 00:09:55.258 "bdev_name": "Malloc3", 00:09:55.258 "name": "Malloc3", 00:09:55.258 "nguid": "E47C5D2890854DEFBE711CDD1703F1AB", 00:09:55.258 "uuid": "e47c5d28-9085-4def-be71-1cdd1703f1ab" 00:09:55.258 } 00:09:55.258 ] 00:09:55.258 }, 00:09:55.258 { 00:09:55.258 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:09:55.258 "subtype": "NVMe", 00:09:55.258 "listen_addresses": [ 00:09:55.258 { 00:09:55.258 "transport": "VFIOUSER", 00:09:55.258 "trtype": "VFIOUSER", 00:09:55.258 "adrfam": "IPv4", 00:09:55.258 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:09:55.258 "trsvcid": "0" 00:09:55.258 } 00:09:55.258 ], 00:09:55.258 "allow_any_host": true, 00:09:55.258 "hosts": [], 00:09:55.258 "serial_number": "SPDK2", 00:09:55.258 "model_number": "SPDK bdev Controller", 00:09:55.258 "max_namespaces": 32, 00:09:55.258 "min_cntlid": 1, 00:09:55.258 "max_cntlid": 65519, 00:09:55.258 "namespaces": [ 00:09:55.258 { 00:09:55.258 "nsid": 1, 00:09:55.258 "bdev_name": "Malloc2", 00:09:55.258 "name": "Malloc2", 00:09:55.258 "nguid": "8DAA769721EB4C4D9BD1B6777A34807E", 00:09:55.258 "uuid": "8daa7697-21eb-4c4d-9bd1-b6777a34807e" 00:09:55.258 } 00:09:55.258 ] 00:09:55.258 } 00:09:55.258 ] 00:09:55.258 14:14:36 -- target/nvmf_vfio_user.sh@44 -- # wait 3113263 00:09:55.258 14:14:36 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:55.258 14:14:36 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:09:55.258 14:14:36 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:09:55.258 14:14:36 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:55.258 [2024-04-26 14:14:36.724092] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
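The debug trace that follows walks the vfio-user device through the standard NVMe controller-enable handshake: read VS and CAP, clear CC.EN and wait for CSTS.RDY = 0, program ASQ/ACQ/AQA, write CC with EN = 1 (the traced value 0x460001 also encodes IOSQES = 6 and IOCQES = 4), then poll until CSTS.RDY = 1. A sketch of that sequence, assuming hypothetical read32/write32/write64 MMIO helpers standing in for the nvme_vfio_ctrlr_{get,set}_reg accessors doing the work in the log:

    import time

    CC, CSTS, AQA, ASQ, ACQ = 0x14, 0x1C, 0x24, 0x28, 0x30  # register offsets per the NVMe spec

    def enable_controller(read32, write32, write64, asq, acq, qsize=256, timeout=15.0):
        if read32(CC) & 1:                  # CC.EN still set from a previous owner:
            write32(CC, read32(CC) & ~1)    # disable first, as the trace does
        while read32(CSTS) & 1:             # wait for CSTS.RDY = 0
            time.sleep(0.01)
        write64(ASQ, asq)                   # admin submission queue base
        write64(ACQ, acq)                   # admin completion queue base
        write32(AQA, ((qsize - 1) << 16) | (qsize - 1))  # 0xff00ff for 256 entries
        write32(CC, (4 << 20) | (6 << 16) | 1)           # IOCQES=4, IOSQES=6, EN=1 -> 0x460001
        deadline = time.monotonic() + timeout
        while not read32(CSTS) & 1:         # poll until CSTS.RDY = 1
            if time.monotonic() > deadline:
                raise TimeoutError("controller never became ready")
            time.sleep(0.01)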
00:09:55.258 [2024-04-26 14:14:36.724141] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3113369 ] 00:09:55.258 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.258 [2024-04-26 14:14:36.765609] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:09:55.258 [2024-04-26 14:14:36.768967] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:55.258 [2024-04-26 14:14:36.769000] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fac685be000 00:09:55.258 [2024-04-26 14:14:36.769959] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:55.258 [2024-04-26 14:14:36.770967] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:55.258 [2024-04-26 14:14:36.771969] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:55.258 [2024-04-26 14:14:36.772981] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:55.258 [2024-04-26 14:14:36.773993] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:55.258 [2024-04-26 14:14:36.775005] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:55.258 [2024-04-26 14:14:36.776012] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:55.258 [2024-04-26 14:14:36.777013] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:55.258 [2024-04-26 14:14:36.778024] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:55.258 [2024-04-26 14:14:36.778051] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fac685b3000 00:09:55.258 [2024-04-26 14:14:36.779512] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:55.258 [2024-04-26 14:14:36.800532] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:09:55.258 [2024-04-26 14:14:36.800569] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:09:55.258 [2024-04-26 14:14:36.804709] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:09:55.258 [2024-04-26 14:14:36.804770] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:55.258 [2024-04-26 14:14:36.804875] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:09:55.258 [2024-04-26 14:14:36.804907] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:09:55.258 [2024-04-26 14:14:36.804919] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:09:55.258 [2024-04-26 14:14:36.805716] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:09:55.258 [2024-04-26 14:14:36.805738] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:09:55.258 [2024-04-26 14:14:36.805753] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:09:55.258 [2024-04-26 14:14:36.806718] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:09:55.258 [2024-04-26 14:14:36.806739] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:09:55.258 [2024-04-26 14:14:36.806755] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:09:55.258 [2024-04-26 14:14:36.807722] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:09:55.258 [2024-04-26 14:14:36.807743] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:55.258 [2024-04-26 14:14:36.808740] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:09:55.258 [2024-04-26 14:14:36.808767] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:09:55.258 [2024-04-26 14:14:36.808778] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:09:55.258 [2024-04-26 14:14:36.808792] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:55.258 [2024-04-26 14:14:36.808904] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:09:55.258 [2024-04-26 14:14:36.808913] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:55.258 [2024-04-26 14:14:36.808923] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:09:55.258 [2024-04-26 14:14:36.809743] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:09:55.258 [2024-04-26 14:14:36.810748] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:09:55.258 [2024-04-26 14:14:36.811756] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:09:55.258 [2024-04-26 14:14:36.812770] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:55.258 [2024-04-26 14:14:36.812846] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:55.258 [2024-04-26 14:14:36.813769] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:09:55.258 [2024-04-26 14:14:36.813796] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:55.258 [2024-04-26 14:14:36.813814] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:09:55.258 [2024-04-26 14:14:36.813849] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:09:55.258 [2024-04-26 14:14:36.813865] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:09:55.258 [2024-04-26 14:14:36.813892] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:55.258 [2024-04-26 14:14:36.813903] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:55.258 [2024-04-26 14:14:36.813928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:55.258 [2024-04-26 14:14:36.822645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:55.258 [2024-04-26 14:14:36.822670] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:09:55.258 [2024-04-26 14:14:36.822680] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:09:55.258 [2024-04-26 14:14:36.822689] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:09:55.258 [2024-04-26 14:14:36.822698] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:55.258 [2024-04-26 14:14:36.822707] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:09:55.258 [2024-04-26 14:14:36.822721] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:09:55.258 [2024-04-26 14:14:36.822731] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:09:55.258 [2024-04-26 14:14:36.822746] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:09:55.258 [2024-04-26 14:14:36.822764] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:55.518 [2024-04-26 14:14:36.830655] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:55.518 [2024-04-26 14:14:36.830692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:55.518 [2024-04-26 14:14:36.830710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:55.518 [2024-04-26 14:14:36.830725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:55.518 [2024-04-26 14:14:36.830740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:55.518 [2024-04-26 14:14:36.830750] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:09:55.518 [2024-04-26 14:14:36.830768] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:55.518 [2024-04-26 14:14:36.830785] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:55.518 [2024-04-26 14:14:36.838656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:55.518 [2024-04-26 14:14:36.838676] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:09:55.518 [2024-04-26 14:14:36.838687] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:55.518 [2024-04-26 14:14:36.838705] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:09:55.518 [2024-04-26 14:14:36.838717] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:09:55.518 [2024-04-26 14:14:36.838733] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:55.518 [2024-04-26 14:14:36.846645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:55.518 [2024-04-26 14:14:36.846721] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:09:55.518 [2024-04-26 14:14:36.846738] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:09:55.518 [2024-04-26 14:14:36.846754] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:55.518 [2024-04-26 14:14:36.846764] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:55.518 [2024-04-26 14:14:36.846775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:55.518 
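One completion worth decoding here: SET FEATURES NUMBER OF QUEUES returns cdw0:7e007e, which packs two zero-based 16-bit counts (queues allocated), and is why the identify output further down advertises 127 I/O submission and completion queues. A quick worked decode:

    cdw0 = 0x7e007e
    nsqa = (cdw0 & 0xFFFF) + 1  # I/O submission queues allocated: 0x7e + 1 = 127
    ncqa = (cdw0 >> 16) + 1     # I/O completion queues allocated: 0x7e + 1 = 127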
[2024-04-26 14:14:36.854649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:55.518 [2024-04-26 14:14:36.854687] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:09:55.518 [2024-04-26 14:14:36.854711] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:09:55.518 [2024-04-26 14:14:36.854729] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:09:55.518 [2024-04-26 14:14:36.854744] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:55.518 [2024-04-26 14:14:36.854753] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:55.518 [2024-04-26 14:14:36.854765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:55.518 [2024-04-26 14:14:36.862650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:55.518 [2024-04-26 14:14:36.862687] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:55.518 [2024-04-26 14:14:36.862704] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:55.518 [2024-04-26 14:14:36.862719] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:55.518 [2024-04-26 14:14:36.862729] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:55.518 [2024-04-26 14:14:36.862741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:55.518 [2024-04-26 14:14:36.870644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:55.518 [2024-04-26 14:14:36.870668] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:55.518 [2024-04-26 14:14:36.870682] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:09:55.518 [2024-04-26 14:14:36.870699] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:09:55.518 [2024-04-26 14:14:36.870712] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:55.518 [2024-04-26 14:14:36.870722] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:09:55.518 [2024-04-26 14:14:36.870731] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:09:55.518 [2024-04-26 14:14:36.870740] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:09:55.518 [2024-04-26 14:14:36.870750] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:09:55.518 [2024-04-26 14:14:36.870777] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:55.518 [2024-04-26 14:14:36.878648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:55.518 [2024-04-26 14:14:36.878676] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:55.518 [2024-04-26 14:14:36.886640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:55.518 [2024-04-26 14:14:36.886667] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:55.518 [2024-04-26 14:14:36.894651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:55.518 [2024-04-26 14:14:36.894677] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:55.518 [2024-04-26 14:14:36.900694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:55.518 [2024-04-26 14:14:36.900724] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:55.518 [2024-04-26 14:14:36.900735] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:55.518 [2024-04-26 14:14:36.900742] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:55.518 [2024-04-26 14:14:36.900750] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:55.518 [2024-04-26 14:14:36.900761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:55.518 [2024-04-26 14:14:36.900775] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:55.518 [2024-04-26 14:14:36.900785] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:55.518 [2024-04-26 14:14:36.900796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:55.518 [2024-04-26 14:14:36.900809] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:55.518 [2024-04-26 14:14:36.900818] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:55.519 [2024-04-26 14:14:36.900829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:55.519 [2024-04-26 14:14:36.900842] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:55.519 [2024-04-26 14:14:36.900852] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:55.519 [2024-04-26 14:14:36.900863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:55.519 [2024-04-26 14:14:36.910640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:55.519 [2024-04-26 14:14:36.910680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:55.519 [2024-04-26 14:14:36.910699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:55.519 [2024-04-26 14:14:36.910713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:55.519 ===================================================== 00:09:55.519 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:55.519 ===================================================== 00:09:55.519 Controller Capabilities/Features 00:09:55.519 ================================ 00:09:55.519 Vendor ID: 4e58 00:09:55.519 Subsystem Vendor ID: 4e58 00:09:55.519 Serial Number: SPDK2 00:09:55.519 Model Number: SPDK bdev Controller 00:09:55.519 Firmware Version: 24.05 00:09:55.519 Recommended Arb Burst: 6 00:09:55.519 IEEE OUI Identifier: 8d 6b 50 00:09:55.519 Multi-path I/O 00:09:55.519 May have multiple subsystem ports: Yes 00:09:55.519 May have multiple controllers: Yes 00:09:55.519 Associated with SR-IOV VF: No 00:09:55.519 Max Data Transfer Size: 131072 00:09:55.519 Max Number of Namespaces: 32 00:09:55.519 Max Number of I/O Queues: 127 00:09:55.519 NVMe Specification Version (VS): 1.3 00:09:55.519 NVMe Specification Version (Identify): 1.3 00:09:55.519 Maximum Queue Entries: 256 00:09:55.519 Contiguous Queues Required: Yes 00:09:55.519 Arbitration Mechanisms Supported 00:09:55.519 Weighted Round Robin: Not Supported 00:09:55.519 Vendor Specific: Not Supported 00:09:55.519 Reset Timeout: 15000 ms 00:09:55.519 Doorbell Stride: 4 bytes 00:09:55.519 NVM Subsystem Reset: Not Supported 00:09:55.519 Command Sets Supported 00:09:55.519 NVM Command Set: Supported 00:09:55.519 Boot Partition: Not Supported 00:09:55.519 Memory Page Size Minimum: 4096 bytes 00:09:55.519 Memory Page Size Maximum: 4096 bytes 00:09:55.519 Persistent Memory Region: Not Supported 00:09:55.519 Optional Asynchronous Events Supported 00:09:55.519 Namespace Attribute Notices: Supported 00:09:55.519 Firmware Activation Notices: Not Supported 00:09:55.519 ANA Change Notices: Not Supported 00:09:55.519 PLE Aggregate Log Change Notices: Not Supported 00:09:55.519 LBA Status Info Alert Notices: Not Supported 00:09:55.519 EGE Aggregate Log Change Notices: Not Supported 00:09:55.519 Normal NVM Subsystem Shutdown event: Not Supported 00:09:55.519 Zone Descriptor Change Notices: Not Supported 00:09:55.519 Discovery Log Change Notices: Not Supported 00:09:55.519 Controller Attributes 00:09:55.519 128-bit Host Identifier: Supported 00:09:55.519 Non-Operational Permissive Mode: Not Supported 00:09:55.519 NVM Sets: Not Supported 00:09:55.519 Read Recovery Levels: Not Supported 00:09:55.519 Endurance Groups: Not Supported 00:09:55.519 Predictable Latency Mode: Not Supported 00:09:55.519 Traffic Based Keep ALive: Not Supported 00:09:55.519 Namespace Granularity: Not Supported 
00:09:55.519 SQ Associations: Not Supported 00:09:55.519 UUID List: Not Supported 00:09:55.519 Multi-Domain Subsystem: Not Supported 00:09:55.519 Fixed Capacity Management: Not Supported 00:09:55.519 Variable Capacity Management: Not Supported 00:09:55.519 Delete Endurance Group: Not Supported 00:09:55.519 Delete NVM Set: Not Supported 00:09:55.519 Extended LBA Formats Supported: Not Supported 00:09:55.519 Flexible Data Placement Supported: Not Supported 00:09:55.519 00:09:55.519 Controller Memory Buffer Support 00:09:55.519 ================================ 00:09:55.519 Supported: No 00:09:55.519 00:09:55.519 Persistent Memory Region Support 00:09:55.519 ================================ 00:09:55.519 Supported: No 00:09:55.519 00:09:55.519 Admin Command Set Attributes 00:09:55.519 ============================ 00:09:55.519 Security Send/Receive: Not Supported 00:09:55.519 Format NVM: Not Supported 00:09:55.519 Firmware Activate/Download: Not Supported 00:09:55.519 Namespace Management: Not Supported 00:09:55.519 Device Self-Test: Not Supported 00:09:55.519 Directives: Not Supported 00:09:55.519 NVMe-MI: Not Supported 00:09:55.519 Virtualization Management: Not Supported 00:09:55.519 Doorbell Buffer Config: Not Supported 00:09:55.519 Get LBA Status Capability: Not Supported 00:09:55.519 Command & Feature Lockdown Capability: Not Supported 00:09:55.519 Abort Command Limit: 4 00:09:55.519 Async Event Request Limit: 4 00:09:55.519 Number of Firmware Slots: N/A 00:09:55.519 Firmware Slot 1 Read-Only: N/A 00:09:55.519 Firmware Activation Without Reset: N/A 00:09:55.519 Multiple Update Detection Support: N/A 00:09:55.519 Firmware Update Granularity: No Information Provided 00:09:55.519 Per-Namespace SMART Log: No 00:09:55.519 Asymmetric Namespace Access Log Page: Not Supported 00:09:55.519 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:09:55.519 Command Effects Log Page: Supported 00:09:55.519 Get Log Page Extended Data: Supported 00:09:55.519 Telemetry Log Pages: Not Supported 00:09:55.519 Persistent Event Log Pages: Not Supported 00:09:55.519 Supported Log Pages Log Page: May Support 00:09:55.519 Commands Supported & Effects Log Page: Not Supported 00:09:55.519 Feature Identifiers & Effects Log Page:May Support 00:09:55.519 NVMe-MI Commands & Effects Log Page: May Support 00:09:55.519 Data Area 4 for Telemetry Log: Not Supported 00:09:55.519 Error Log Page Entries Supported: 128 00:09:55.519 Keep Alive: Supported 00:09:55.519 Keep Alive Granularity: 10000 ms 00:09:55.519 00:09:55.519 NVM Command Set Attributes 00:09:55.519 ========================== 00:09:55.519 Submission Queue Entry Size 00:09:55.519 Max: 64 00:09:55.519 Min: 64 00:09:55.519 Completion Queue Entry Size 00:09:55.519 Max: 16 00:09:55.519 Min: 16 00:09:55.519 Number of Namespaces: 32 00:09:55.519 Compare Command: Supported 00:09:55.519 Write Uncorrectable Command: Not Supported 00:09:55.519 Dataset Management Command: Supported 00:09:55.519 Write Zeroes Command: Supported 00:09:55.519 Set Features Save Field: Not Supported 00:09:55.519 Reservations: Not Supported 00:09:55.519 Timestamp: Not Supported 00:09:55.519 Copy: Supported 00:09:55.519 Volatile Write Cache: Present 00:09:55.519 Atomic Write Unit (Normal): 1 00:09:55.519 Atomic Write Unit (PFail): 1 00:09:55.519 Atomic Compare & Write Unit: 1 00:09:55.519 Fused Compare & Write: Supported 00:09:55.519 Scatter-Gather List 00:09:55.519 SGL Command Set: Supported (Dword aligned) 00:09:55.519 SGL Keyed: Not Supported 00:09:55.519 SGL Bit Bucket Descriptor: Not Supported 00:09:55.519 
SGL Metadata Pointer: Not Supported 00:09:55.519 Oversized SGL: Not Supported 00:09:55.519 SGL Metadata Address: Not Supported 00:09:55.519 SGL Offset: Not Supported 00:09:55.519 Transport SGL Data Block: Not Supported 00:09:55.519 Replay Protected Memory Block: Not Supported 00:09:55.519 00:09:55.519 Firmware Slot Information 00:09:55.519 ========================= 00:09:55.519 Active slot: 1 00:09:55.519 Slot 1 Firmware Revision: 24.05 00:09:55.519 00:09:55.519 00:09:55.519 Commands Supported and Effects 00:09:55.519 ============================== 00:09:55.519 Admin Commands 00:09:55.519 -------------- 00:09:55.519 Get Log Page (02h): Supported 00:09:55.519 Identify (06h): Supported 00:09:55.519 Abort (08h): Supported 00:09:55.519 Set Features (09h): Supported 00:09:55.519 Get Features (0Ah): Supported 00:09:55.519 Asynchronous Event Request (0Ch): Supported 00:09:55.519 Keep Alive (18h): Supported 00:09:55.519 I/O Commands 00:09:55.519 ------------ 00:09:55.519 Flush (00h): Supported LBA-Change 00:09:55.519 Write (01h): Supported LBA-Change 00:09:55.519 Read (02h): Supported 00:09:55.519 Compare (05h): Supported 00:09:55.519 Write Zeroes (08h): Supported LBA-Change 00:09:55.519 Dataset Management (09h): Supported LBA-Change 00:09:55.519 Copy (19h): Supported LBA-Change 00:09:55.519 Unknown (79h): Supported LBA-Change 00:09:55.519 Unknown (7Ah): Supported 00:09:55.519 00:09:55.519 Error Log 00:09:55.519 ========= 00:09:55.519 00:09:55.519 Arbitration 00:09:55.519 =========== 00:09:55.519 Arbitration Burst: 1 00:09:55.519 00:09:55.519 Power Management 00:09:55.519 ================ 00:09:55.519 Number of Power States: 1 00:09:55.519 Current Power State: Power State #0 00:09:55.519 Power State #0: 00:09:55.519 Max Power: 0.00 W 00:09:55.519 Non-Operational State: Operational 00:09:55.519 Entry Latency: Not Reported 00:09:55.519 Exit Latency: Not Reported 00:09:55.519 Relative Read Throughput: 0 00:09:55.519 Relative Read Latency: 0 00:09:55.519 Relative Write Throughput: 0 00:09:55.519 Relative Write Latency: 0 00:09:55.519 Idle Power: Not Reported 00:09:55.519 Active Power: Not Reported 00:09:55.519 Non-Operational Permissive Mode: Not Supported 00:09:55.519 00:09:55.519 Health Information 00:09:55.519 ================== 00:09:55.519 Critical Warnings: 00:09:55.519 Available Spare Space: OK 00:09:55.519 Temperature: OK 00:09:55.519 Device Reliability: OK 00:09:55.519 Read Only: No 00:09:55.519 Volatile Memory Backup: OK 00:09:55.519 [2024-04-26 14:14:36.910855] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:55.519 [2024-04-26 14:14:36.918644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:55.519 [2024-04-26 14:14:36.918694] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:09:55.519 [2024-04-26 14:14:36.918714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:55.519 [2024-04-26 14:14:36.918726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:55.519 [2024-04-26 14:14:36.918738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:55.519 [2024-04-26 14:14:36.918755]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:55.519 [2024-04-26 14:14:36.918843] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:09:55.519 [2024-04-26 14:14:36.918867] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:09:55.519 [2024-04-26 14:14:36.919859] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:55.519 [2024-04-26 14:14:36.919938] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:09:55.519 [2024-04-26 14:14:36.919954] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:09:55.519 [2024-04-26 14:14:36.920857] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:09:55.519 [2024-04-26 14:14:36.920883] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:09:55.519 [2024-04-26 14:14:36.920968] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:09:55.519 [2024-04-26 14:14:36.922510] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:55.519 Current Temperature: 0 Kelvin (-273 Celsius) 00:09:55.519 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:55.519 Available Spare: 0% 00:09:55.519 Available Spare Threshold: 0% 00:09:55.519 Life Percentage Used: 0% 00:09:55.519 Data Units Read: 0 00:09:55.519 Data Units Written: 0 00:09:55.519 Host Read Commands: 0 00:09:55.519 Host Write Commands: 0 00:09:55.519 Controller Busy Time: 0 minutes 00:09:55.519 Power Cycles: 0 00:09:55.519 Power On Hours: 0 hours 00:09:55.519 Unsafe Shutdowns: 0 00:09:55.519 Unrecoverable Media Errors: 0 00:09:55.519 Lifetime Error Log Entries: 0 00:09:55.519 Warning Temperature Time: 0 minutes 00:09:55.519 Critical Temperature Time: 0 minutes 00:09:55.519 00:09:55.519 Number of Queues 00:09:55.519 ================ 00:09:55.519 Number of I/O Submission Queues: 127 00:09:55.519 Number of I/O Completion Queues: 127 00:09:55.519 00:09:55.519 Active Namespaces 00:09:55.519 ================= 00:09:55.519 Namespace ID:1 00:09:55.519 Error Recovery Timeout: Unlimited 00:09:55.519 Command Set Identifier: NVM (00h) 00:09:55.519 Deallocate: Supported 00:09:55.519 Deallocated/Unwritten Error: Not Supported 00:09:55.519 Deallocated Read Value: Unknown 00:09:55.519 Deallocate in Write Zeroes: Not Supported 00:09:55.519 Deallocated Guard Field: 0xFFFF 00:09:55.519 Flush: Supported 00:09:55.519 Reservation: Supported 00:09:55.519 Namespace Sharing Capabilities: Multiple Controllers 00:09:55.519 Size (in LBAs): 131072 (0GiB) 00:09:55.519 Capacity (in LBAs): 131072 (0GiB) 00:09:55.519 Utilization (in LBAs): 131072 (0GiB) 00:09:55.520 NGUID: 8DAA769721EB4C4D9BD1B6777A34807E 00:09:55.520 UUID: 8daa7697-21eb-4c4d-9bd1-b6777a34807e 00:09:55.520 Thin Provisioning: Not Supported 00:09:55.520 Per-NS Atomic Units: Yes 00:09:55.520 Atomic Boundary Size (Normal): 0 00:09:55.520 Atomic Boundary Size (PFail): 0 00:09:55.520 Atomic Boundary Offset: 0 00:09:55.520 Maximum Single Source Range Length: 65535
00:09:55.520 Maximum Copy Length: 65535 00:09:55.520 Maximum Source Range Count: 1 00:09:55.520 NGUID/EUI64 Never Reused: No 00:09:55.520 Namespace Write Protected: No 00:09:55.520 Number of LBA Formats: 1 00:09:55.520 Current LBA Format: LBA Format #00 00:09:55.520 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:55.520 00:09:55.520 14:14:36 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:55.520 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.777 [2024-04-26 14:14:37.151049] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:01.043 [2024-04-26 14:14:42.257908] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:01.043 Initializing NVMe Controllers 00:10:01.043 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:01.043 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:01.043 Initialization complete. Launching workers. 00:10:01.043 ======================================================== 00:10:01.043 Latency(us) 00:10:01.043 Device Information : IOPS MiB/s Average min max 00:10:01.043 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24119.40 94.22 5311.33 1489.58 8579.67 00:10:01.043 ======================================================== 00:10:01.043 Total : 24119.40 94.22 5311.33 1489.58 8579.67 00:10:01.043 00:10:01.043 14:14:42 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:01.043 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.043 [2024-04-26 14:14:42.495629] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:06.309 [2024-04-26 14:14:47.516738] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:06.309 Initializing NVMe Controllers 00:10:06.309 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:06.309 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:06.309 Initialization complete. Launching workers. 
00:10:06.309 ======================================================== 00:10:06.309 Latency(us) 00:10:06.309 Device Information : IOPS MiB/s Average min max 00:10:06.309 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24062.17 93.99 5319.47 1466.11 10544.10 00:10:06.309 ======================================================== 00:10:06.309 Total : 24062.17 93.99 5319.47 1466.11 10544.10 00:10:06.309 00:10:06.309 14:14:47 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:06.309 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.309 [2024-04-26 14:14:47.749325] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:11.580 [2024-04-26 14:14:52.886790] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:11.580 Initializing NVMe Controllers 00:10:11.580 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:11.580 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:11.580 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:11.580 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:11.580 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:11.580 Initialization complete. Launching workers. 00:10:11.580 Starting thread on core 2 00:10:11.580 Starting thread on core 3 00:10:11.580 Starting thread on core 1 00:10:11.580 14:14:52 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:11.580 EAL: No free 2048 kB hugepages reported on node 1 00:10:11.838 [2024-04-26 14:14:53.188236] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:15.124 [2024-04-26 14:14:56.242481] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:15.124 Initializing NVMe Controllers 00:10:15.124 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:15.124 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:15.124 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:15.124 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:15.124 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:15.124 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:15.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:15.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:15.124 Initialization complete. Launching workers. 
00:10:15.124 Starting thread on core 1 with urgent priority queue 00:10:15.124 Starting thread on core 2 with urgent priority queue 00:10:15.124 Starting thread on core 3 with urgent priority queue 00:10:15.124 Starting thread on core 0 with urgent priority queue 00:10:15.124 SPDK bdev Controller (SPDK2 ) core 0: 7651.67 IO/s 13.07 secs/100000 ios 00:10:15.124 SPDK bdev Controller (SPDK2 ) core 1: 7984.67 IO/s 12.52 secs/100000 ios 00:10:15.124 SPDK bdev Controller (SPDK2 ) core 2: 7120.00 IO/s 14.04 secs/100000 ios 00:10:15.124 SPDK bdev Controller (SPDK2 ) core 3: 6637.33 IO/s 15.07 secs/100000 ios 00:10:15.124 ======================================================== 00:10:15.124 00:10:15.124 14:14:56 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:15.124 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.124 [2024-04-26 14:14:56.537255] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:15.124 [2024-04-26 14:14:56.547374] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:15.124 Initializing NVMe Controllers 00:10:15.124 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:15.124 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:15.124 Namespace ID: 1 size: 0GB 00:10:15.124 Initialization complete. 00:10:15.124 INFO: using host memory buffer for IO 00:10:15.124 Hello world! 00:10:15.124 14:14:56 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:15.125 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.383 [2024-04-26 14:14:56.830799] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:16.758 Initializing NVMe Controllers 00:10:16.758 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:16.758 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:16.758 Initialization complete. Launching workers. 
00:10:16.758 submit (in ns) avg, min, max = 9852.2, 4454.8, 4013915.6 00:10:16.758 complete (in ns) avg, min, max = 28535.5, 2610.4, 7001309.6 00:10:16.758 00:10:16.758 Submit histogram 00:10:16.758 ================ 00:10:16.758 Range in us Cumulative Count 00:10:16.758 4.433 - 4.456: 0.0084% ( 1) 00:10:16.758 4.456 - 4.480: 0.1178% ( 13) 00:10:16.758 4.480 - 4.504: 1.0519% ( 111) 00:10:16.758 4.504 - 4.527: 3.4166% ( 281) 00:10:16.758 4.527 - 4.551: 7.1867% ( 448) 00:10:16.758 4.551 - 4.575: 12.4127% ( 621) 00:10:16.758 4.575 - 4.599: 16.5783% ( 495) 00:10:16.758 4.599 - 4.622: 18.6485% ( 246) 00:10:16.758 4.622 - 4.646: 19.5826% ( 111) 00:10:16.758 4.646 - 4.670: 20.3989% ( 97) 00:10:16.758 4.670 - 4.693: 21.8295% ( 170) 00:10:16.758 4.693 - 4.717: 24.6571% ( 336) 00:10:16.758 4.717 - 4.741: 29.1845% ( 538) 00:10:16.758 4.741 - 4.764: 34.6377% ( 648) 00:10:16.758 4.764 - 4.788: 37.9113% ( 389) 00:10:16.758 4.788 - 4.812: 39.0137% ( 131) 00:10:16.758 4.812 - 4.836: 39.7627% ( 89) 00:10:16.758 4.836 - 4.859: 40.1750% ( 49) 00:10:16.758 4.859 - 4.883: 40.6884% ( 61) 00:10:16.758 4.883 - 4.907: 41.2690% ( 69) 00:10:16.758 4.907 - 4.930: 41.7824% ( 61) 00:10:16.758 4.930 - 4.954: 42.3041% ( 62) 00:10:16.758 4.954 - 4.978: 42.6239% ( 38) 00:10:16.758 4.978 - 5.001: 42.8259% ( 24) 00:10:16.758 5.001 - 5.025: 43.0363% ( 25) 00:10:16.758 5.025 - 5.049: 43.0952% ( 7) 00:10:16.758 5.049 - 5.073: 43.1625% ( 8) 00:10:16.758 5.073 - 5.096: 43.3308% ( 20) 00:10:16.758 5.096 - 5.120: 43.7852% ( 54) 00:10:16.758 5.120 - 5.144: 46.0153% ( 265) 00:10:16.758 5.144 - 5.167: 49.9621% ( 469) 00:10:16.758 5.167 - 5.191: 56.8964% ( 824) 00:10:16.758 5.191 - 5.215: 59.7156% ( 335) 00:10:16.758 5.215 - 5.239: 61.0115% ( 154) 00:10:16.758 5.239 - 5.262: 62.2907% ( 152) 00:10:16.758 5.262 - 5.286: 63.7718% ( 176) 00:10:16.758 5.286 - 5.310: 66.2543% ( 295) 00:10:16.758 5.310 - 5.333: 71.3120% ( 601) 00:10:16.758 5.333 - 5.357: 73.9291% ( 311) 00:10:16.758 5.357 - 5.381: 75.3850% ( 173) 00:10:16.758 5.381 - 5.404: 76.5632% ( 140) 00:10:16.758 5.404 - 5.428: 78.3977% ( 218) 00:10:16.758 5.428 - 5.452: 78.9363% ( 64) 00:10:16.758 5.452 - 5.476: 79.1383% ( 24) 00:10:16.758 5.476 - 5.499: 79.3739% ( 28) 00:10:16.758 5.499 - 5.523: 80.8129% ( 171) 00:10:16.758 5.523 - 5.547: 83.7667% ( 351) 00:10:16.758 5.547 - 5.570: 90.6337% ( 816) 00:10:16.758 5.570 - 5.594: 92.7123% ( 247) 00:10:16.758 5.594 - 5.618: 93.8231% ( 132) 00:10:16.758 5.618 - 5.641: 94.3196% ( 59) 00:10:16.758 5.641 - 5.665: 94.6562% ( 40) 00:10:16.758 5.665 - 5.689: 94.8245% ( 20) 00:10:16.758 5.689 - 5.713: 94.9424% ( 14) 00:10:16.758 5.713 - 5.736: 95.0349% ( 11) 00:10:16.758 5.736 - 5.760: 95.0938% ( 7) 00:10:16.758 5.760 - 5.784: 95.2116% ( 14) 00:10:16.758 5.784 - 5.807: 95.3042% ( 11) 00:10:16.758 5.807 - 5.831: 95.4557% ( 18) 00:10:16.758 5.831 - 5.855: 95.6829% ( 27) 00:10:16.758 5.855 - 5.879: 95.7502% ( 8) 00:10:16.758 5.879 - 5.902: 95.8091% ( 7) 00:10:16.758 5.902 - 5.926: 95.9774% ( 20) 00:10:16.758 5.926 - 5.950: 96.0448% ( 8) 00:10:16.758 5.950 - 5.973: 96.1373% ( 11) 00:10:16.758 5.973 - 5.997: 96.2636% ( 15) 00:10:16.758 5.997 - 6.021: 96.4066% ( 17) 00:10:16.758 6.021 - 6.044: 96.4571% ( 6) 00:10:16.758 6.044 - 6.068: 96.5329% ( 9) 00:10:16.758 6.068 - 6.116: 96.8442% ( 37) 00:10:16.758 6.116 - 6.163: 96.9536% ( 13) 00:10:16.758 6.163 - 6.210: 97.0883% ( 16) 00:10:16.758 6.210 - 6.258: 97.4838% ( 47) 00:10:16.758 6.258 - 6.305: 97.6942% ( 25) 00:10:16.758 6.305 - 6.353: 97.8709% ( 21) 00:10:16.758 6.353 - 6.400: 97.8877% ( 2) 00:10:16.758 
6.400 - 6.447: 97.9803% ( 11) 00:10:16.758 6.447 - 6.495: 98.1234% ( 17) 00:10:16.758 6.495 - 6.542: 98.1739% ( 6) 00:10:16.758 6.542 - 6.590: 98.1991% ( 3) 00:10:16.758 6.590 - 6.637: 98.2833% ( 10) 00:10:16.758 6.637 - 6.684: 98.3590% ( 9) 00:10:16.758 6.684 - 6.732: 98.3842% ( 3) 00:10:16.758 6.732 - 6.779: 98.4263% ( 5) 00:10:16.758 6.779 - 6.827: 98.4432% ( 2) 00:10:16.758 6.827 - 6.874: 98.6367% ( 23) 00:10:16.758 6.874 - 6.921: 98.9481% ( 37) 00:10:16.758 6.921 - 6.969: 99.0238% ( 9) 00:10:16.758 6.969 - 7.016: 99.0659% ( 5) 00:10:16.758 7.016 - 7.064: 99.0743% ( 1) 00:10:16.758 7.159 - 7.206: 99.0827% ( 1) 00:10:16.758 7.538 - 7.585: 99.0996% ( 2) 00:10:16.758 7.585 - 7.633: 99.1080% ( 1) 00:10:16.758 7.964 - 8.012: 99.1164% ( 1) 00:10:16.758 8.391 - 8.439: 99.1248% ( 1) 00:10:16.758 8.439 - 8.486: 99.1332% ( 1) 00:10:16.758 8.533 - 8.581: 99.1416% ( 1) 00:10:16.758 8.723 - 8.770: 99.1585% ( 2) 00:10:16.758 8.770 - 8.818: 99.1669% ( 1) 00:10:16.758 8.865 - 8.913: 99.1753% ( 1) 00:10:16.758 8.913 - 8.960: 99.1837% ( 1) 00:10:16.758 8.960 - 9.007: 99.1921% ( 1) 00:10:16.758 9.007 - 9.055: 99.2005% ( 1) 00:10:16.758 9.055 - 9.102: 99.2174% ( 2) 00:10:16.758 9.102 - 9.150: 99.2342% ( 2) 00:10:16.758 9.150 - 9.197: 99.2426% ( 1) 00:10:16.758 9.244 - 9.292: 99.2594% ( 2) 00:10:16.758 9.292 - 9.339: 99.2679% ( 1) 00:10:16.758 9.339 - 9.387: 99.2763% ( 1) 00:10:16.758 9.387 - 9.434: 99.2847% ( 1) 00:10:16.758 9.481 - 9.529: 99.2931% ( 1) 00:10:16.758 9.529 - 9.576: 99.3099% ( 2) 00:10:16.758 9.624 - 9.671: 99.3184% ( 1) 00:10:16.758 9.671 - 9.719: 99.3268% ( 1) 00:10:16.758 9.719 - 9.766: 99.3436% ( 2) 00:10:16.758 9.766 - 9.813: 99.3604% ( 2) 00:10:16.758 9.813 - 9.861: 99.3688% ( 1) 00:10:16.758 9.861 - 9.908: 99.3773% ( 1) 00:10:16.758 9.908 - 9.956: 99.3941% ( 2) 00:10:16.758 9.956 - 10.003: 99.4025% ( 1) 00:10:16.758 10.050 - 10.098: 99.4193% ( 2) 00:10:16.758 10.098 - 10.145: 99.4362% ( 2) 00:10:16.758 10.145 - 10.193: 99.4446% ( 1) 00:10:16.758 10.193 - 10.240: 99.4530% ( 1) 00:10:16.758 10.240 - 10.287: 99.4951% ( 5) 00:10:16.758 10.287 - 10.335: 99.5119% ( 2) 00:10:16.758 10.335 - 10.382: 99.5287% ( 2) 00:10:16.758 10.477 - 10.524: 99.5372% ( 1) 00:10:16.758 10.524 - 10.572: 99.5456% ( 1) 00:10:16.758 10.572 - 10.619: 99.5540% ( 1) 00:10:16.758 10.619 - 10.667: 99.5708% ( 2) 00:10:16.758 10.714 - 10.761: 99.5792% ( 1) 00:10:16.758 10.761 - 10.809: 99.5876% ( 1) 00:10:16.758 10.809 - 10.856: 99.5961% ( 1) 00:10:16.758 10.856 - 10.904: 99.6045% ( 1) 00:10:16.758 11.188 - 11.236: 99.6213% ( 2) 00:10:16.758 11.236 - 11.283: 99.6297% ( 1) 00:10:16.758 11.330 - 11.378: 99.6381% ( 1) 00:10:16.758 11.425 - 11.473: 99.6466% ( 1) 00:10:16.758 11.473 - 11.520: 99.6550% ( 1) 00:10:16.758 11.520 - 11.567: 99.6634% ( 1) 00:10:16.758 11.757 - 11.804: 99.6718% ( 1) 00:10:16.758 11.852 - 11.899: 99.6802% ( 1) 00:10:16.758 11.947 - 11.994: 99.6886% ( 1) 00:10:16.758 12.231 - 12.326: 99.6970% ( 1) 00:10:16.758 12.421 - 12.516: 99.7223% ( 3) 00:10:16.758 12.516 - 12.610: 99.7307% ( 1) 00:10:16.758 13.084 - 13.179: 99.7475% ( 2) 00:10:16.758 13.274 - 13.369: 99.7644% ( 2) 00:10:16.758 13.369 - 13.464: 99.7812% ( 2) 00:10:16.758 13.464 - 13.559: 99.7896% ( 1) 00:10:16.758 13.559 - 13.653: 99.8064% ( 2) 00:10:16.758 13.748 - 13.843: 99.8149% ( 1) 00:10:16.758 13.938 - 14.033: 99.8233% ( 1) 00:10:16.758 14.317 - 14.412: 99.8401% ( 2) 00:10:16.758 14.507 - 14.601: 99.8485% ( 1) 00:10:16.758 15.550 - 15.644: 99.8569% ( 1) 00:10:16.758 17.920 - 18.015: 99.8654% ( 1) 00:10:16.758 19.816 - 19.911: 99.8738% ( 
1) 00:10:16.758 21.144 - 21.239: 99.8822% ( 1) 00:10:16.758 3980.705 - 4004.978: 99.9327% ( 6) 00:10:16.758 4004.978 - 4029.250: 100.0000% ( 8) 00:10:16.758 00:10:16.758 Complete histogram 00:10:16.758 ================== 00:10:16.758 Range in us Cumulative Count 00:10:16.758 2.607 - 2.619: 0.2440% ( 29) 00:10:16.758 2.619 - 2.631: 6.8754% ( 788) 00:10:16.758 2.631 - 2.643: 18.1856% ( 1344) 00:10:16.758 2.643 - 2.655: 21.6612% ( 413) 00:10:16.758 2.655 - 2.667: 31.7176% ( 1195) 00:10:16.758 2.667 - 2.679: 67.5335% ( 4256) 00:10:16.758 2.679 - 2.690: 86.5354% ( 2258) 00:10:16.758 2.690 - 2.702: 93.1499% ( 786) 00:10:16.758 2.702 - 2.714: 95.0602% ( 227) 00:10:16.758 2.714 - 2.726: 95.7166% ( 78) 00:10:16.758 2.726 - 2.738: 96.3056% ( 70) 00:10:16.758 2.738 - 2.750: 96.7853% ( 57) 00:10:16.758 2.750 - 2.761: 97.2229% ( 52) 00:10:16.758 2.761 - 2.773: 97.6100% ( 46) 00:10:16.758 2.773 - 2.785: 97.9382% ( 39) 00:10:16.758 2.785 - 2.797: 98.0729% ( 16) 00:10:16.758 2.797 - 2.809: 98.1654% ( 11) 00:10:16.758 2.809 - 2.821: 98.1907% ( 3) 00:10:16.758 2.821 - 2.833: 98.2075% ( 2) 00:10:16.758 2.856 - 2.868: 98.2159% ( 1) 00:10:16.758 2.880 - 2.892: 98.2244% ( 1) 00:10:16.759 2.892 - 2.904: 98.2412% ( 2) 00:10:16.759 2.904 - 2.916: 98.2496% ( 1) 00:10:16.759 2.916 - 2.927: 98.2833% ( 4) 00:10:16.759 2.939 - 2.951: 98.2917% ( 1) 00:10:16.759 2.963 - 2.975: 98.3085% ( 2) 00:10:16.759 2.975 - 2.987: 98.3253% ( 2) 00:10:16.759 2.987 - 2.999: 98.3422% ( 2) 00:10:16.759 2.999 - 3.010: 98.3506% ( 1) 00:10:16.759 3.022 - 3.034: 98.3674% ( 2) 00:10:16.759 3.034 - 3.058: 98.3758% ( 1) 00:10:16.759 3.081 - 3.105: 98.3927% ( 2) 00:10:16.759 3.105 - 3.129: 98.4095% ( 2) 00:10:16.759 3.129 - 3.153: 98.4179% ( 1) 00:10:16.759 3.153 - 3.176: 98.4347% ( 2) 00:10:16.759 3.176 - 3.200: 98.4600% ( 3) 00:10:16.759 3.200 - 3.224: 98.4684% ( 1) 00:10:16.759 3.247 - 3.271: 98.4768% ( 1) 00:10:16.759 3.271 - 3.295: 98.4936% ( 2) 00:10:16.759 3.295 - 3.319: 98.5273% ( 4) 00:10:16.759 3.319 - 3.342: 98.5441% ( 2) 00:10:16.759 3.342 - 3.366: 98.5610% ( 2) 00:10:16.759 3.366 - 3.390: 98.5946% ( 4) 00:10:16.759 3.390 - 3.413: 98.6451% ( 6) 00:10:16.759 3.413 - 3.437: 98.6620% ( 2) 00:10:16.759 3.437 - 3.461: 98.6956% ( 4) 00:10:16.759 3.461 - 3.484: 98.7377% ( 5) 00:10:16.759 3.484 - 3.508: 98.7545% ( 2) 00:10:16.759 3.508 - 3.532: 98.8134% ( 7) 00:10:16.759 3.532 - 3.556: 98.8471% ( 4) 00:10:16.759 3.556 - 3.579: 98.8808% ( 4) 00:10:16.759 3.579 - 3.603: 98.9228% ( 5) 00:10:16.759 3.603 - 3.627: 98.9481% ( 3) 00:10:16.759 3.627 - 3.650: 98.9649% ( 2) 00:10:16.759 3.650 - 3.674: 98.9817% ( 2) 00:10:16.759 3.674 - 3.698: 98.9986% ( 2) 00:10:16.759 3.721 - 3.745: 99.0070% ( 1) 00:10:16.759 3.745 - 3.769: 99.0238% ( 2) 00:10:16.759 3.840 - 3.864: 99.0406% ( 2) 00:10:16.759 3.935 - 3.959: 99.0491% ( 1) 00:10:16.759 4.030 - 4.053: 99.0575% ( 1) 00:10:16.759 4.409 - 4.433: 99.0659% ( 1) 00:10:16.759 4.575 - 4.599: 99.0743% ( 1) 00:10:16.759 4.741 - 4.764: 99.0827% ( 1) 00:10:16.759 4.836 - 4.859: 99.0911% ( 1) 00:10:16.759 6.495 - 6.542: 99.0996% ( 1) 00:10:16.759 6.542 - 6.590: 99.1164% ( 2) 00:10:16.759 6.590 - 6.637: 99.1248% ( 1) 00:10:16.759 6.637 - 6.684: 99.1332% ( 1) 00:10:16.759 6.779 - 6.827: 99.1416% ( 1) 00:10:16.759 6.969 - 7.016: 99.1500% ( 1) 00:10:16.759 7.111 - 7.159: 99.1585% ( 1) 00:10:16.759 7.159 - 7.206: 99.1753% ( 2) 00:10:16.759 7.253 - 7.301: 99.1837% ( 1) 00:10:16.759 7.538 - 7.585: 99.1921% ( 1) 00:10:16.759 7.633 - 7.680: 99.2090% ( 2) 00:10:16.759 7.727 - 7.775: 99.2174% ( 1) 00:10:16.759 7.822 - 7.870: 
99.2258% ( 1) 00:10:16.759 8.107 - 8.154: 99.2342% ( 1) 00:10:16.759 8.201 - 8.249: 99.2426% ( 1) 00:10:16.759 8.439 - 8.486: 99.2510% ( 1) 00:10:16.759 8.770 - 8.818: 99.2763% ( 3) 00:10:16.759 9.007 - 9.055: 99.2847% ( 1) 00:10:16.759 9.292 - 9.339: 99.2931% ( 1) 00:10:16.759 9.387 - 9.434: 99.3015% ( 1) 00:10:16.759 9.719 - 9.766: 99.3099% ( 1) 00:10:16.759 10.335 - 10.382: 99.3184% ( 1) 00:10:16.759 11.615 - 11.662: 99.3268% ( 1) 00:10:16.759 13.748 - 13.843: 99.3436% ( 2) 00:10:16.759 15.739 - 15.834: 99.3520% ( 1) 00:10:16.759 [2024-04-26 14:14:57.931886] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:16.759 30.720 - 30.910: 99.3604% ( 1) 00:10:16.759 3980.705 - 4004.978: 99.7980% ( 52) 00:10:16.759 4004.978 - 4029.250: 99.9916% ( 23) 00:10:16.759 6990.507 - 7039.052: 100.0000% ( 1) 00:10:16.759 00:10:16.759 14:14:57 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:16.759 14:14:57 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:16.759 14:14:57 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:16.759 14:14:57 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:16.759 14:14:57 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:16.759 [ 00:10:16.759 { 00:10:16.759 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:16.759 "subtype": "Discovery", 00:10:16.759 "listen_addresses": [], 00:10:16.759 "allow_any_host": true, 00:10:16.759 "hosts": [] 00:10:16.759 }, 00:10:16.759 { 00:10:16.759 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:16.759 "subtype": "NVMe", 00:10:16.759 "listen_addresses": [ 00:10:16.759 { 00:10:16.759 "transport": "VFIOUSER", 00:10:16.759 "trtype": "VFIOUSER", 00:10:16.759 "adrfam": "IPv4", 00:10:16.759 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:16.759 "trsvcid": "0" 00:10:16.759 } 00:10:16.759 ], 00:10:16.759 "allow_any_host": true, 00:10:16.759 "hosts": [], 00:10:16.759 "serial_number": "SPDK1", 00:10:16.759 "model_number": "SPDK bdev Controller", 00:10:16.759 "max_namespaces": 32, 00:10:16.759 "min_cntlid": 1, 00:10:16.759 "max_cntlid": 65519, 00:10:16.759 "namespaces": [ 00:10:16.759 { 00:10:16.759 "nsid": 1, 00:10:16.759 "bdev_name": "Malloc1", 00:10:16.759 "name": "Malloc1", 00:10:16.759 "nguid": "F26ACF7AD7D24DBEAEB20BFEA970C77E", 00:10:16.759 "uuid": "f26acf7a-d7d2-4dbe-aeb2-0bfea970c77e" 00:10:16.759 }, 00:10:16.759 { 00:10:16.759 "nsid": 2, 00:10:16.759 "bdev_name": "Malloc3", 00:10:16.759 "name": "Malloc3", 00:10:16.759 "nguid": "E47C5D2890854DEFBE711CDD1703F1AB", 00:10:16.759 "uuid": "e47c5d28-9085-4def-be71-1cdd1703f1ab" 00:10:16.759 } 00:10:16.759 ] 00:10:16.759 }, 00:10:16.759 { 00:10:16.759 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:16.759 "subtype": "NVMe", 00:10:16.759 "listen_addresses": [ 00:10:16.759 { 00:10:16.759 "transport": "VFIOUSER", 00:10:16.759 "trtype": "VFIOUSER", 00:10:16.759 "adrfam": "IPv4", 00:10:16.759 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:16.759 "trsvcid": "0" 00:10:16.759 } 00:10:16.759 ], 00:10:16.759 "allow_any_host": true, 00:10:16.759 "hosts": [], 00:10:16.759 "serial_number": "SPDK2", 00:10:16.759 "model_number": "SPDK bdev Controller", 00:10:16.759 "max_namespaces": 32, 00:10:16.759 "min_cntlid": 1, 00:10:16.759 "max_cntlid": 65519, 00:10:16.759 "namespaces": [ 00:10:16.759 { 00:10:16.759 
"nsid": 1, 00:10:16.759 "bdev_name": "Malloc2", 00:10:16.759 "name": "Malloc2", 00:10:16.759 "nguid": "8DAA769721EB4C4D9BD1B6777A34807E", 00:10:16.759 "uuid": "8daa7697-21eb-4c4d-9bd1-b6777a34807e" 00:10:16.759 } 00:10:16.759 ] 00:10:16.759 } 00:10:16.759 ] 00:10:16.759 14:14:58 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:16.759 14:14:58 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3115285 00:10:16.759 14:14:58 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:16.759 14:14:58 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:16.759 14:14:58 -- common/autotest_common.sh@1251 -- # local i=0 00:10:16.759 14:14:58 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:16.759 14:14:58 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:16.759 14:14:58 -- common/autotest_common.sh@1262 -- # return 0 00:10:16.759 14:14:58 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:16.759 14:14:58 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:17.045 EAL: No free 2048 kB hugepages reported on node 1 00:10:17.045 [2024-04-26 14:14:58.449176] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:17.045 Malloc4 00:10:17.338 14:14:58 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:17.338 [2024-04-26 14:14:58.892413] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:17.599 14:14:58 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:17.599 Asynchronous Event Request test 00:10:17.599 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:17.599 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:17.599 Registering asynchronous event callbacks... 00:10:17.599 Starting namespace attribute notice tests for all controllers... 00:10:17.599 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:17.599 aer_cb - Changed Namespace 00:10:17.599 Cleaning up... 
00:10:17.859 [ 00:10:17.859 { 00:10:17.859 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:17.859 "subtype": "Discovery", 00:10:17.859 "listen_addresses": [], 00:10:17.859 "allow_any_host": true, 00:10:17.859 "hosts": [] 00:10:17.859 }, 00:10:17.859 { 00:10:17.859 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:17.859 "subtype": "NVMe", 00:10:17.859 "listen_addresses": [ 00:10:17.859 { 00:10:17.859 "transport": "VFIOUSER", 00:10:17.859 "trtype": "VFIOUSER", 00:10:17.859 "adrfam": "IPv4", 00:10:17.859 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:17.859 "trsvcid": "0" 00:10:17.859 } 00:10:17.859 ], 00:10:17.859 "allow_any_host": true, 00:10:17.859 "hosts": [], 00:10:17.859 "serial_number": "SPDK1", 00:10:17.859 "model_number": "SPDK bdev Controller", 00:10:17.859 "max_namespaces": 32, 00:10:17.859 "min_cntlid": 1, 00:10:17.859 "max_cntlid": 65519, 00:10:17.859 "namespaces": [ 00:10:17.859 { 00:10:17.859 "nsid": 1, 00:10:17.859 "bdev_name": "Malloc1", 00:10:17.859 "name": "Malloc1", 00:10:17.859 "nguid": "F26ACF7AD7D24DBEAEB20BFEA970C77E", 00:10:17.859 "uuid": "f26acf7a-d7d2-4dbe-aeb2-0bfea970c77e" 00:10:17.859 }, 00:10:17.859 { 00:10:17.859 "nsid": 2, 00:10:17.859 "bdev_name": "Malloc3", 00:10:17.859 "name": "Malloc3", 00:10:17.859 "nguid": "E47C5D2890854DEFBE711CDD1703F1AB", 00:10:17.859 "uuid": "e47c5d28-9085-4def-be71-1cdd1703f1ab" 00:10:17.859 } 00:10:17.859 ] 00:10:17.859 }, 00:10:17.859 { 00:10:17.859 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:17.859 "subtype": "NVMe", 00:10:17.859 "listen_addresses": [ 00:10:17.859 { 00:10:17.859 "transport": "VFIOUSER", 00:10:17.859 "trtype": "VFIOUSER", 00:10:17.859 "adrfam": "IPv4", 00:10:17.859 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:17.859 "trsvcid": "0" 00:10:17.859 } 00:10:17.859 ], 00:10:17.859 "allow_any_host": true, 00:10:17.859 "hosts": [], 00:10:17.859 "serial_number": "SPDK2", 00:10:17.859 "model_number": "SPDK bdev Controller", 00:10:17.859 "max_namespaces": 32, 00:10:17.859 "min_cntlid": 1, 00:10:17.859 "max_cntlid": 65519, 00:10:17.859 "namespaces": [ 00:10:17.859 { 00:10:17.859 "nsid": 1, 00:10:17.859 "bdev_name": "Malloc2", 00:10:17.859 "name": "Malloc2", 00:10:17.859 "nguid": "8DAA769721EB4C4D9BD1B6777A34807E", 00:10:17.859 "uuid": "8daa7697-21eb-4c4d-9bd1-b6777a34807e" 00:10:17.859 }, 00:10:17.859 { 00:10:17.859 "nsid": 2, 00:10:17.859 "bdev_name": "Malloc4", 00:10:17.859 "name": "Malloc4", 00:10:17.859 "nguid": "CD6466B656D748B3B6A119F453D812FF", 00:10:17.859 "uuid": "cd6466b6-56d7-48b3-b6a1-19f453d812ff" 00:10:17.859 } 00:10:17.859 ] 00:10:17.859 } 00:10:17.859 ] 00:10:17.859 14:14:59 -- target/nvmf_vfio_user.sh@44 -- # wait 3115285 00:10:17.859 14:14:59 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:17.859 14:14:59 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3110914 00:10:17.859 14:14:59 -- common/autotest_common.sh@936 -- # '[' -z 3110914 ']' 00:10:17.859 14:14:59 -- common/autotest_common.sh@940 -- # kill -0 3110914 00:10:17.859 14:14:59 -- common/autotest_common.sh@941 -- # uname 00:10:17.859 14:14:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:17.859 14:14:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3110914 00:10:17.859 14:14:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:17.859 14:14:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:17.859 14:14:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3110914' 00:10:17.859 killing process with pid 3110914 00:10:17.859 
14:14:59 -- common/autotest_common.sh@955 -- # kill 3110914 00:10:17.859 [2024-04-26 14:14:59.227715] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:10:17.859 14:14:59 -- common/autotest_common.sh@960 -- # wait 3110914 00:10:18.118 14:14:59 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:18.118 14:14:59 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:18.118 14:14:59 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:18.118 14:14:59 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:18.118 14:14:59 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:18.118 14:14:59 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3115405 00:10:18.118 14:14:59 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:18.118 14:14:59 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3115405' 00:10:18.118 Process pid: 3115405 00:10:18.118 14:14:59 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:18.118 14:14:59 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3115405 00:10:18.118 14:14:59 -- common/autotest_common.sh@817 -- # '[' -z 3115405 ']' 00:10:18.118 14:14:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.118 14:14:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:18.118 14:14:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.118 14:14:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:18.118 14:14:59 -- common/autotest_common.sh@10 -- # set +x 00:10:18.118 [2024-04-26 14:14:59.560931] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:18.118 [2024-04-26 14:14:59.562187] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:10:18.118 [2024-04-26 14:14:59.562256] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.118 EAL: No free 2048 kB hugepages reported on node 1 00:10:18.118 [2024-04-26 14:14:59.622851] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.377 [2024-04-26 14:14:59.739157] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.377 [2024-04-26 14:14:59.739214] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.377 [2024-04-26 14:14:59.739230] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.377 [2024-04-26 14:14:59.739243] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.377 [2024-04-26 14:14:59.739255] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
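At this point the first target (pid 3110914) is gone and the test brings the stack back up with the SPDK event framework in interrupt mode, passing '-M -I' through to the VFIOUSER transport. A minimal sketch of this second bring-up, using only the binary and RPC invocations printed in this trace; the socket-polling loop is an assumption standing in for the harness's waitforlisten helper:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Relaunch nvmf_tgt pinned to cores 0-3 with interrupt mode enabled.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  nvmfpid=$!
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done  # stand-in for waitforlisten
  # Create the transport with the extra flags this pass uses, then set up
  # one Malloc-backed subsystem per vfio-user device, as traced below.
  $RPC nvmf_create_transport -t VFIOUSER -M -I
  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      $RPC bdev_malloc_create 64 512 -b Malloc$i
      $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
          -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done

The reactor and thread notices that follow show the relaunched target switching its poll groups to interrupt mode: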
00:10:18.377 [2024-04-26 14:14:59.739337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.377 [2024-04-26 14:14:59.739389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.377 [2024-04-26 14:14:59.739438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.377 [2024-04-26 14:14:59.739442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.377 [2024-04-26 14:14:59.828324] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:10:18.377 [2024-04-26 14:14:59.828529] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:10:18.377 [2024-04-26 14:14:59.828776] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:10:18.377 [2024-04-26 14:14:59.829424] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:18.377 [2024-04-26 14:14:59.829533] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:10:18.377 14:14:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:18.377 14:14:59 -- common/autotest_common.sh@850 -- # return 0 00:10:18.377 14:14:59 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:19.311 14:15:00 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:19.877 14:15:01 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:19.877 14:15:01 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:19.877 14:15:01 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:19.877 14:15:01 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:19.877 14:15:01 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:19.877 Malloc1 00:10:20.137 14:15:01 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:20.399 14:15:01 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:20.659 14:15:02 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:20.917 14:15:02 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:20.917 14:15:02 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:20.917 14:15:02 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:21.176 Malloc2 00:10:21.176 14:15:02 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:21.434 14:15:02 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:21.692 14:15:03 -- target/nvmf_vfio_user.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:22.258 14:15:03 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:22.258 14:15:03 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3115405 00:10:22.258 14:15:03 -- common/autotest_common.sh@936 -- # '[' -z 3115405 ']' 00:10:22.258 14:15:03 -- common/autotest_common.sh@940 -- # kill -0 3115405 00:10:22.258 14:15:03 -- common/autotest_common.sh@941 -- # uname 00:10:22.258 14:15:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:22.258 14:15:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3115405 00:10:22.258 14:15:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:22.259 14:15:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:22.259 14:15:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3115405' 00:10:22.259 killing process with pid 3115405 00:10:22.259 14:15:03 -- common/autotest_common.sh@955 -- # kill 3115405 00:10:22.259 14:15:03 -- common/autotest_common.sh@960 -- # wait 3115405 00:10:22.259 14:15:03 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:22.259 14:15:03 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:22.259 00:10:22.259 real 0m53.333s 00:10:22.259 user 3m30.632s 00:10:22.259 sys 0m4.397s 00:10:22.259 14:15:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:22.259 14:15:03 -- common/autotest_common.sh@10 -- # set +x 00:10:22.259 ************************************ 00:10:22.259 END TEST nvmf_vfio_user 00:10:22.259 ************************************ 00:10:22.517 14:15:03 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:22.517 14:15:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:22.517 14:15:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:22.517 14:15:03 -- common/autotest_common.sh@10 -- # set +x 00:10:22.517 ************************************ 00:10:22.517 START TEST nvmf_vfio_user_nvme_compliance 00:10:22.517 ************************************ 00:10:22.517 14:15:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:22.517 * Looking for test storage... 
00:10:22.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:22.517 14:15:04 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:22.517 14:15:04 -- nvmf/common.sh@7 -- # uname -s 00:10:22.517 14:15:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.517 14:15:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.517 14:15:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.517 14:15:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.518 14:15:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.518 14:15:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.518 14:15:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.518 14:15:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.518 14:15:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.518 14:15:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.518 14:15:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:22.518 14:15:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:10:22.518 14:15:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.518 14:15:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.518 14:15:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:22.518 14:15:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.518 14:15:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:22.518 14:15:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.518 14:15:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.518 14:15:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.518 14:15:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.518 14:15:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.518 14:15:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.518 14:15:04 -- paths/export.sh@5 -- # export PATH 00:10:22.518 14:15:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.518 14:15:04 -- nvmf/common.sh@47 -- # : 0 00:10:22.518 14:15:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:22.518 14:15:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:22.518 14:15:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.518 14:15:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.518 14:15:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.518 14:15:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:22.518 14:15:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:22.518 14:15:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:22.518 14:15:04 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:22.518 14:15:04 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:22.518 14:15:04 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:22.518 14:15:04 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:22.518 14:15:04 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:22.518 14:15:04 -- compliance/compliance.sh@20 -- # nvmfpid=3116050 00:10:22.518 14:15:04 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:22.518 14:15:04 -- compliance/compliance.sh@21 -- # echo 'Process pid: 3116050' 00:10:22.518 Process pid: 3116050 00:10:22.518 14:15:04 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:22.518 14:15:04 -- compliance/compliance.sh@24 -- # waitforlisten 3116050 00:10:22.518 14:15:04 -- common/autotest_common.sh@817 -- # '[' -z 3116050 ']' 00:10:22.518 14:15:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.518 14:15:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:22.518 14:15:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.518 14:15:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:22.518 14:15:04 -- common/autotest_common.sh@10 -- # set +x 00:10:22.518 [2024-04-26 14:15:04.083448] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
00:10:22.518 [2024-04-26 14:15:04.083556] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.776 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.776 [2024-04-26 14:15:04.143320] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:22.776 [2024-04-26 14:15:04.257843] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.776 [2024-04-26 14:15:04.257906] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.776 [2024-04-26 14:15:04.257922] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:22.776 [2024-04-26 14:15:04.257935] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:22.776 [2024-04-26 14:15:04.257946] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:22.776 [2024-04-26 14:15:04.258033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.776 [2024-04-26 14:15:04.258116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:22.776 [2024-04-26 14:15:04.258147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.034 14:15:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:23.034 14:15:04 -- common/autotest_common.sh@850 -- # return 0 00:10:23.034 14:15:04 -- compliance/compliance.sh@26 -- # sleep 1 00:10:23.970 14:15:05 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:23.970 14:15:05 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:23.970 14:15:05 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:23.970 14:15:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:23.970 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:10:23.970 14:15:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:23.970 14:15:05 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:23.970 14:15:05 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:23.970 14:15:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:23.970 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:10:23.970 malloc0 00:10:23.970 14:15:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:23.970 14:15:05 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:23.970 14:15:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:23.970 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:10:23.970 14:15:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:23.970 14:15:05 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:23.970 14:15:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:23.970 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:10:23.970 14:15:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:23.970 14:15:05 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:23.970 14:15:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:23.970 14:15:05 -- common/autotest_common.sh@10 -- # set +x 00:10:23.970 14:15:05 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:23.970 14:15:05 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:23.970 EAL: No free 2048 kB hugepages reported on node 1 00:10:24.228 00:10:24.228 00:10:24.228 CUnit - A unit testing framework for C - Version 2.1-3 00:10:24.229 http://cunit.sourceforge.net/ 00:10:24.229 00:10:24.229 00:10:24.229 Suite: nvme_compliance 00:10:24.229 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-26 14:15:05.600239] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:24.229 [2024-04-26 14:15:05.601738] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:24.229 [2024-04-26 14:15:05.601766] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:24.229 [2024-04-26 14:15:05.601780] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:24.229 [2024-04-26 14:15:05.603273] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:24.229 passed 00:10:24.229 Test: admin_identify_ctrlr_verify_fused ...[2024-04-26 14:15:05.713983] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:24.229 [2024-04-26 14:15:05.717009] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:24.229 passed 00:10:24.486 Test: admin_identify_ns ...[2024-04-26 14:15:05.827898] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:24.486 [2024-04-26 14:15:05.887660] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:24.486 [2024-04-26 14:15:05.895653] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:24.486 [2024-04-26 14:15:05.916810] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:24.486 passed 00:10:24.486 Test: admin_get_features_mandatory_features ...[2024-04-26 14:15:06.019732] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:24.486 [2024-04-26 14:15:06.022751] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:24.744 passed 00:10:24.744 Test: admin_get_features_optional_features ...[2024-04-26 14:15:06.123392] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:24.744 [2024-04-26 14:15:06.126418] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:24.744 passed 00:10:24.744 Test: admin_set_features_number_of_queues ...[2024-04-26 14:15:06.232820] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.002 [2024-04-26 14:15:06.338796] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.002 passed 00:10:25.002 Test: admin_get_log_page_mandatory_logs ...[2024-04-26 14:15:06.439226] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.002 [2024-04-26 14:15:06.442260] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.002 passed 00:10:25.002 Test: admin_get_log_page_with_lpo ...[2024-04-26 14:15:06.542659] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.260 [2024-04-26 14:15:06.612666] 
ctrlr.c:2604:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:25.260 [2024-04-26 14:15:06.625755] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.260 passed 00:10:25.260 Test: fabric_property_get ...[2024-04-26 14:15:06.727188] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.260 [2024-04-26 14:15:06.728507] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:10:25.260 [2024-04-26 14:15:06.730216] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.260 passed 00:10:25.518 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-26 14:15:06.836912] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.518 [2024-04-26 14:15:06.838256] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:25.518 [2024-04-26 14:15:06.839950] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.518 passed 00:10:25.518 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-26 14:15:06.944159] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.518 [2024-04-26 14:15:07.027645] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:25.518 [2024-04-26 14:15:07.043650] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:25.518 [2024-04-26 14:15:07.048786] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.776 passed 00:10:25.776 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-26 14:15:07.147572] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.776 [2024-04-26 14:15:07.148912] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:10:25.776 [2024-04-26 14:15:07.150587] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:25.776 passed 00:10:25.776 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-26 14:15:07.252599] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:25.776 [2024-04-26 14:15:07.329658] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:26.034 [2024-04-26 14:15:07.353643] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:26.034 [2024-04-26 14:15:07.358794] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:26.034 passed 00:10:26.034 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-26 14:15:07.465529] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:26.034 [2024-04-26 14:15:07.466870] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:26.034 [2024-04-26 14:15:07.466912] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:26.034 [2024-04-26 14:15:07.468554] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:26.034 passed 00:10:26.034 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-26 14:15:07.571533] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:26.292 [2024-04-26 14:15:07.664662] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:10:26.292 [2024-04-26 14:15:07.672654] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:26.292 [2024-04-26 14:15:07.680652] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:26.292 [2024-04-26 14:15:07.688663] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:26.292 [2024-04-26 14:15:07.717795] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:26.292 passed 00:10:26.292 Test: admin_create_io_sq_verify_pc ...[2024-04-26 14:15:07.823079] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:26.292 [2024-04-26 14:15:07.839661] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:26.292 [2024-04-26 14:15:07.857565] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:26.550 passed 00:10:26.550 Test: admin_create_io_qp_max_qps ...[2024-04-26 14:15:07.962271] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:27.926 [2024-04-26 14:15:09.059654] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:27.926 [2024-04-26 14:15:09.437253] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:27.926 passed 00:10:28.184 Test: admin_create_io_sq_shared_cq ...[2024-04-26 14:15:09.542651] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:28.184 [2024-04-26 14:15:09.675643] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:28.184 [2024-04-26 14:15:09.712735] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:28.442 passed 00:10:28.442 00:10:28.442 Run Summary: Type Total Ran Passed Failed Inactive 00:10:28.442 suites 1 1 n/a 0 0 00:10:28.442 tests 18 18 18 0 0 00:10:28.442 asserts 360 360 360 0 n/a 00:10:28.442 00:10:28.442 Elapsed time = 1.744 seconds 00:10:28.442 14:15:09 -- compliance/compliance.sh@42 -- # killprocess 3116050 00:10:28.442 14:15:09 -- common/autotest_common.sh@936 -- # '[' -z 3116050 ']' 00:10:28.442 14:15:09 -- common/autotest_common.sh@940 -- # kill -0 3116050 00:10:28.442 14:15:09 -- common/autotest_common.sh@941 -- # uname 00:10:28.442 14:15:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:28.442 14:15:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3116050 00:10:28.442 14:15:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:28.442 14:15:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:28.442 14:15:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3116050' 00:10:28.442 killing process with pid 3116050 00:10:28.442 14:15:09 -- common/autotest_common.sh@955 -- # kill 3116050 00:10:28.442 14:15:09 -- common/autotest_common.sh@960 -- # wait 3116050 00:10:28.700 14:15:10 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:10:28.700 14:15:10 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:28.700 00:10:28.700 real 0m6.067s 00:10:28.700 user 0m17.049s 00:10:28.700 sys 0m0.514s 00:10:28.700 14:15:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:28.700 14:15:10 -- common/autotest_common.sh@10 -- # set +x 00:10:28.700 ************************************ 00:10:28.700 END TEST 
nvmf_vfio_user_nvme_compliance 00:10:28.700 ************************************ 00:10:28.700 14:15:10 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:28.700 14:15:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:28.700 14:15:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:28.700 14:15:10 -- common/autotest_common.sh@10 -- # set +x 00:10:28.701 ************************************ 00:10:28.701 START TEST nvmf_vfio_user_fuzz 00:10:28.701 ************************************ 00:10:28.701 14:15:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:28.701 * Looking for test storage... 00:10:28.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.701 14:15:10 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:28.701 14:15:10 -- nvmf/common.sh@7 -- # uname -s 00:10:28.701 14:15:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.701 14:15:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.701 14:15:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.701 14:15:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.701 14:15:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.701 14:15:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.701 14:15:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.701 14:15:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.701 14:15:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.701 14:15:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.701 14:15:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:28.701 14:15:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:10:28.701 14:15:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.701 14:15:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.701 14:15:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:28.701 14:15:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.701 14:15:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:28.701 14:15:10 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.701 14:15:10 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.701 14:15:10 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.701 14:15:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.701 14:15:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.701 14:15:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.701 14:15:10 -- paths/export.sh@5 -- # export PATH 00:10:28.701 14:15:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.701 14:15:10 -- nvmf/common.sh@47 -- # : 0 00:10:28.701 14:15:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:28.701 14:15:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:28.701 14:15:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.701 14:15:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.701 14:15:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.701 14:15:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:28.701 14:15:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:28.701 14:15:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:28.701 14:15:10 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:28.701 14:15:10 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:28.701 14:15:10 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:28.701 14:15:10 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:10:28.701 14:15:10 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:28.701 14:15:10 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:28.701 14:15:10 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:10:28.701 14:15:10 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3117166 00:10:28.701 14:15:10 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:28.701 14:15:10 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3117166' 00:10:28.701 Process pid: 3117166 00:10:28.701 14:15:10 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:28.701 14:15:10 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3117166 00:10:28.701 14:15:10 -- common/autotest_common.sh@817 -- 
# '[' -z 3117166 ']' 00:10:28.701 14:15:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.701 14:15:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:28.701 14:15:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.701 14:15:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:28.701 14:15:10 -- common/autotest_common.sh@10 -- # set +x 00:10:29.267 14:15:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:29.267 14:15:10 -- common/autotest_common.sh@850 -- # return 0 00:10:29.267 14:15:10 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:10:30.201 14:15:11 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:30.201 14:15:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:30.201 14:15:11 -- common/autotest_common.sh@10 -- # set +x 00:10:30.201 14:15:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:30.201 14:15:11 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:10:30.201 14:15:11 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:30.201 14:15:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:30.201 14:15:11 -- common/autotest_common.sh@10 -- # set +x 00:10:30.201 malloc0 00:10:30.201 14:15:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:30.201 14:15:11 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:10:30.201 14:15:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:30.201 14:15:11 -- common/autotest_common.sh@10 -- # set +x 00:10:30.201 14:15:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:30.201 14:15:11 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:30.201 14:15:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:30.201 14:15:11 -- common/autotest_common.sh@10 -- # set +x 00:10:30.201 14:15:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:30.201 14:15:11 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:30.201 14:15:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:30.201 14:15:11 -- common/autotest_common.sh@10 -- # set +x 00:10:30.201 14:15:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:30.201 14:15:11 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:10:30.201 14:15:11 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:11:02.274 Fuzzing completed. 
Shutting down the fuzz application 00:11:02.274 00:11:02.274 Dumping successful admin opcodes: 00:11:02.274 8, 9, 10, 24, 00:11:02.274 Dumping successful io opcodes: 00:11:02.274 0, 00:11:02.274 NS: 0x200003a1ef00 I/O qp, Total commands completed: 547583, total successful commands: 2106, random_seed: 3488963584 00:11:02.274 NS: 0x200003a1ef00 admin qp, Total commands completed: 69954, total successful commands: 549, random_seed: 2616528896 00:11:02.274 14:15:42 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:11:02.274 14:15:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:02.274 14:15:42 -- common/autotest_common.sh@10 -- # set +x 00:11:02.274 14:15:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:02.274 14:15:42 -- target/vfio_user_fuzz.sh@46 -- # killprocess 3117166 00:11:02.274 14:15:42 -- common/autotest_common.sh@936 -- # '[' -z 3117166 ']' 00:11:02.274 14:15:42 -- common/autotest_common.sh@940 -- # kill -0 3117166 00:11:02.274 14:15:42 -- common/autotest_common.sh@941 -- # uname 00:11:02.274 14:15:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:02.274 14:15:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3117166 00:11:02.274 14:15:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:02.274 14:15:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:02.274 14:15:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3117166' 00:11:02.274 killing process with pid 3117166 00:11:02.274 14:15:42 -- common/autotest_common.sh@955 -- # kill 3117166 00:11:02.274 14:15:42 -- common/autotest_common.sh@960 -- # wait 3117166 00:11:02.274 14:15:42 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:11:02.274 14:15:42 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:02.274 00:11:02.274 real 0m32.243s 00:11:02.274 user 0m32.663s 00:11:02.274 sys 0m25.978s 00:11:02.274 14:15:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:02.274 14:15:42 -- common/autotest_common.sh@10 -- # set +x 00:11:02.274 ************************************ 00:11:02.274 END TEST nvmf_vfio_user_fuzz 00:11:02.274 ************************************ 00:11:02.274 14:15:42 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:02.274 14:15:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:02.274 14:15:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:02.274 14:15:42 -- common/autotest_common.sh@10 -- # set +x 00:11:02.274 ************************************ 00:11:02.274 START TEST nvmf_host_management 00:11:02.274 ************************************ 00:11:02.275 14:15:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:02.275 * Looking for test storage... 
00:11:02.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.275 14:15:42 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.275 14:15:42 -- nvmf/common.sh@7 -- # uname -s 00:11:02.275 14:15:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.275 14:15:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.275 14:15:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.275 14:15:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.275 14:15:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.275 14:15:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.275 14:15:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.275 14:15:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.275 14:15:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.275 14:15:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.275 14:15:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:11:02.275 14:15:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:11:02.275 14:15:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.275 14:15:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.275 14:15:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.275 14:15:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.275 14:15:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.275 14:15:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.275 14:15:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.275 14:15:42 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.275 14:15:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.275 14:15:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.275 14:15:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.275 14:15:42 -- paths/export.sh@5 -- # export PATH 00:11:02.275 14:15:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.275 14:15:42 -- nvmf/common.sh@47 -- # : 0 00:11:02.275 14:15:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:02.275 14:15:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:02.275 14:15:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.275 14:15:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.275 14:15:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.275 14:15:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:02.275 14:15:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:02.275 14:15:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:02.275 14:15:42 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:02.275 14:15:42 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:02.275 14:15:42 -- target/host_management.sh@105 -- # nvmftestinit 00:11:02.275 14:15:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:02.275 14:15:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.275 14:15:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:02.275 14:15:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:02.275 14:15:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:02.275 14:15:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.275 14:15:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:02.275 14:15:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.275 14:15:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:02.275 14:15:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:02.275 14:15:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:02.275 14:15:42 -- common/autotest_common.sh@10 -- # set +x 00:11:02.534 14:15:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:02.534 14:15:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:02.534 14:15:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:02.534 14:15:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:02.534 14:15:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:02.534 14:15:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:02.534 14:15:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:02.534 14:15:44 -- nvmf/common.sh@295 -- # net_devs=() 00:11:02.534 14:15:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:02.534 
14:15:44 -- nvmf/common.sh@296 -- # e810=() 00:11:02.534 14:15:44 -- nvmf/common.sh@296 -- # local -ga e810 00:11:02.534 14:15:44 -- nvmf/common.sh@297 -- # x722=() 00:11:02.534 14:15:44 -- nvmf/common.sh@297 -- # local -ga x722 00:11:02.534 14:15:44 -- nvmf/common.sh@298 -- # mlx=() 00:11:02.534 14:15:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:02.534 14:15:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:02.534 14:15:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:02.534 14:15:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:02.534 14:15:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:02.534 14:15:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:02.534 14:15:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:02.534 14:15:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:02.534 14:15:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:02.534 14:15:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:02.534 14:15:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:02.534 14:15:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:02.534 14:15:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:02.534 14:15:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:02.534 14:15:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:02.534 14:15:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:02.534 14:15:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:02.535 14:15:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:02.535 14:15:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:02.535 14:15:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:11:02.535 Found 0000:08:00.0 (0x8086 - 0x159b) 00:11:02.535 14:15:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:02.535 14:15:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:02.535 14:15:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.535 14:15:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.535 14:15:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:02.535 14:15:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:02.535 14:15:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:11:02.535 Found 0000:08:00.1 (0x8086 - 0x159b) 00:11:02.535 14:15:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:02.535 14:15:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:02.535 14:15:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.535 14:15:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.535 14:15:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:02.535 14:15:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:02.535 14:15:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:02.535 14:15:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:02.535 14:15:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:02.535 14:15:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.535 14:15:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:02.535 14:15:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.535 14:15:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:08:00.0: cvl_0_0' 00:11:02.535 Found net devices under 0000:08:00.0: cvl_0_0 00:11:02.535 14:15:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.535 14:15:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:02.535 14:15:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.535 14:15:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:02.535 14:15:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.535 14:15:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:11:02.535 Found net devices under 0000:08:00.1: cvl_0_1 00:11:02.535 14:15:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.535 14:15:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:02.535 14:15:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:02.535 14:15:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:02.535 14:15:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:02.535 14:15:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:02.535 14:15:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.535 14:15:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.535 14:15:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:02.535 14:15:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:02.535 14:15:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:02.535 14:15:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:02.535 14:15:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:02.535 14:15:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:02.535 14:15:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.535 14:15:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:02.535 14:15:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:02.535 14:15:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:02.535 14:15:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:02.793 14:15:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:02.793 14:15:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:02.793 14:15:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:02.793 14:15:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:02.793 14:15:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:02.793 14:15:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:02.793 14:15:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:02.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:02.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:11:02.793 00:11:02.793 --- 10.0.0.2 ping statistics --- 00:11:02.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.793 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:11:02.793 14:15:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:02.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:02.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:11:02.793 00:11:02.793 --- 10.0.0.1 ping statistics --- 00:11:02.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.793 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:11:02.793 14:15:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:02.793 14:15:44 -- nvmf/common.sh@411 -- # return 0 00:11:02.793 14:15:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:02.793 14:15:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:02.793 14:15:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:02.793 14:15:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:02.793 14:15:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:02.793 14:15:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:02.793 14:15:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:02.793 14:15:44 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:11:02.793 14:15:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:02.793 14:15:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:02.793 14:15:44 -- common/autotest_common.sh@10 -- # set +x 00:11:02.793 ************************************ 00:11:02.793 START TEST nvmf_host_management 00:11:02.793 ************************************ 00:11:02.793 14:15:44 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:11:02.793 14:15:44 -- target/host_management.sh@69 -- # starttarget 00:11:02.793 14:15:44 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:02.793 14:15:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:02.793 14:15:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:02.793 14:15:44 -- common/autotest_common.sh@10 -- # set +x 00:11:02.793 14:15:44 -- nvmf/common.sh@470 -- # nvmfpid=3121432 00:11:02.793 14:15:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:02.793 14:15:44 -- nvmf/common.sh@471 -- # waitforlisten 3121432 00:11:02.793 14:15:44 -- common/autotest_common.sh@817 -- # '[' -z 3121432 ']' 00:11:02.794 14:15:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.794 14:15:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:02.794 14:15:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.794 14:15:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:02.794 14:15:44 -- common/autotest_common.sh@10 -- # set +x 00:11:03.052 [2024-04-26 14:15:44.375054] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:11:03.052 [2024-04-26 14:15:44.375137] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.052 EAL: No free 2048 kB hugepages reported on node 1 00:11:03.052 [2024-04-26 14:15:44.439290] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:03.052 [2024-04-26 14:15:44.556173] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:03.052 [2024-04-26 14:15:44.556231] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.052 [2024-04-26 14:15:44.556247] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.052 [2024-04-26 14:15:44.556260] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.052 [2024-04-26 14:15:44.556272] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:03.052 [2024-04-26 14:15:44.556360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.052 [2024-04-26 14:15:44.556415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.052 [2024-04-26 14:15:44.556466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:03.052 [2024-04-26 14:15:44.556469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.310 14:15:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:03.310 14:15:44 -- common/autotest_common.sh@850 -- # return 0 00:11:03.310 14:15:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:03.310 14:15:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:03.310 14:15:44 -- common/autotest_common.sh@10 -- # set +x 00:11:03.310 14:15:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.310 14:15:44 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:03.310 14:15:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.310 14:15:44 -- common/autotest_common.sh@10 -- # set +x 00:11:03.310 [2024-04-26 14:15:44.703321] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:03.310 14:15:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.310 14:15:44 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:03.310 14:15:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:03.310 14:15:44 -- common/autotest_common.sh@10 -- # set +x 00:11:03.310 14:15:44 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:03.310 14:15:44 -- target/host_management.sh@23 -- # cat 00:11:03.310 14:15:44 -- target/host_management.sh@30 -- # rpc_cmd 00:11:03.310 14:15:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.310 14:15:44 -- common/autotest_common.sh@10 -- # set +x 00:11:03.310 Malloc0 00:11:03.310 [2024-04-26 14:15:44.760186] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:03.310 14:15:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.310 14:15:44 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:03.310 14:15:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:03.310 14:15:44 -- common/autotest_common.sh@10 -- # set +x 00:11:03.310 14:15:44 -- target/host_management.sh@73 -- # perfpid=3121476 00:11:03.310 14:15:44 -- target/host_management.sh@74 -- # waitforlisten 3121476 /var/tmp/bdevperf.sock 00:11:03.310 14:15:44 -- common/autotest_common.sh@817 -- # '[' -z 3121476 ']' 00:11:03.310 14:15:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:03.310 14:15:44 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w 
verify -t 10 00:11:03.310 14:15:44 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:03.310 14:15:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:03.310 14:15:44 -- nvmf/common.sh@521 -- # config=() 00:11:03.310 14:15:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:03.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:03.310 14:15:44 -- nvmf/common.sh@521 -- # local subsystem config 00:11:03.310 14:15:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:03.310 14:15:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:03.310 14:15:44 -- common/autotest_common.sh@10 -- # set +x 00:11:03.311 14:15:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:03.311 { 00:11:03.311 "params": { 00:11:03.311 "name": "Nvme$subsystem", 00:11:03.311 "trtype": "$TEST_TRANSPORT", 00:11:03.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:03.311 "adrfam": "ipv4", 00:11:03.311 "trsvcid": "$NVMF_PORT", 00:11:03.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:03.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:03.311 "hdgst": ${hdgst:-false}, 00:11:03.311 "ddgst": ${ddgst:-false} 00:11:03.311 }, 00:11:03.311 "method": "bdev_nvme_attach_controller" 00:11:03.311 } 00:11:03.311 EOF 00:11:03.311 )") 00:11:03.311 14:15:44 -- nvmf/common.sh@543 -- # cat 00:11:03.311 14:15:44 -- nvmf/common.sh@545 -- # jq . 00:11:03.311 14:15:44 -- nvmf/common.sh@546 -- # IFS=, 00:11:03.311 14:15:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:03.311 "params": { 00:11:03.311 "name": "Nvme0", 00:11:03.311 "trtype": "tcp", 00:11:03.311 "traddr": "10.0.0.2", 00:11:03.311 "adrfam": "ipv4", 00:11:03.311 "trsvcid": "4420", 00:11:03.311 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:03.311 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:03.311 "hdgst": false, 00:11:03.311 "ddgst": false 00:11:03.311 }, 00:11:03.311 "method": "bdev_nvme_attach_controller" 00:11:03.311 }' 00:11:03.311 [2024-04-26 14:15:44.842661] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:11:03.311 [2024-04-26 14:15:44.842757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3121476 ] 00:11:03.311 EAL: No free 2048 kB hugepages reported on node 1 00:11:03.569 [2024-04-26 14:15:44.904295] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.569 [2024-04-26 14:15:45.019380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.827 Running I/O for 10 seconds... 
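"Running I/O for 10 seconds..." marks the point where the script starts polling bdevperf for traffic. A minimal sketch of that polling loop, modeled on the waitforio calls traced next (the rpc.py path is an assumption; the bdev_get_iostat call and jq filter match the trace):

    rpc_py=./spdk/scripts/rpc.py                      # assumed path
    for i in {1..10}; do
        ops=$("$rpc_py" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
              jq -r '.bdevs[0].num_read_ops')         # 67, then 515 in this run
        [ "$ops" -ge 100 ] && break                   # enough I/O observed
        sleep 0.25                                    # same back-off as the log
    done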
00:11:03.827 14:15:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:03.827 14:15:45 -- common/autotest_common.sh@850 -- # return 0 00:11:03.827 14:15:45 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:03.827 14:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.827 14:15:45 -- common/autotest_common.sh@10 -- # set +x 00:11:03.827 14:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.827 14:15:45 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:03.827 14:15:45 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:03.827 14:15:45 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:03.827 14:15:45 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:03.827 14:15:45 -- target/host_management.sh@52 -- # local ret=1 00:11:03.827 14:15:45 -- target/host_management.sh@53 -- # local i 00:11:03.827 14:15:45 -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:03.827 14:15:45 -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:03.827 14:15:45 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:03.827 14:15:45 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:03.827 14:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.827 14:15:45 -- common/autotest_common.sh@10 -- # set +x 00:11:03.827 14:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.827 14:15:45 -- target/host_management.sh@55 -- # read_io_count=67 00:11:03.827 14:15:45 -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:11:03.827 14:15:45 -- target/host_management.sh@62 -- # sleep 0.25 00:11:04.085 14:15:45 -- target/host_management.sh@54 -- # (( i-- )) 00:11:04.085 14:15:45 -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:04.085 14:15:45 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:04.085 14:15:45 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:04.085 14:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.085 14:15:45 -- common/autotest_common.sh@10 -- # set +x 00:11:04.085 14:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.085 14:15:45 -- target/host_management.sh@55 -- # read_io_count=515 00:11:04.085 14:15:45 -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:11:04.086 14:15:45 -- target/host_management.sh@59 -- # ret=0 00:11:04.086 14:15:45 -- target/host_management.sh@60 -- # break 00:11:04.086 14:15:45 -- target/host_management.sh@64 -- # return 0 00:11:04.086 14:15:45 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:04.086 14:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.086 14:15:45 -- common/autotest_common.sh@10 -- # set +x 00:11:04.086 [2024-04-26 14:15:45.647126] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b15840 is same with the state(5) to be set 00:11:04.086 [2024-04-26 14:15:45.647208] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b15840 is same with the state(5) to be set 00:11:04.086 [2024-04-26 14:15:45.647224] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b15840 is same with the state(5) to be set 00:11:04.086 
[2024-04-26 14:15:45.647237] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b15840 is same with the state(5) to be set 00:11:04.086 [2024-04-26 14:15:45.647250] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b15840 is same with the state(5) to be set 00:11:04.086 [2024-04-26 14:15:45.647264] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b15840 is same with the state(5) to be set 00:11:04.086 [2024-04-26 14:15:45.647288] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b15840 is same with the state(5) to be set 00:11:04.086 [2024-04-26 14:15:45.647301] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b15840 is same with the state(5) to be set 00:11:04.086 [2024-04-26 14:15:45.647314] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b15840 is same with the state(5) to be set 00:11:04.086 [2024-04-26 14:15:45.647327] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b15840 is same with the state(5) to be set 00:11:04.086 [2024-04-26 14:15:45.647349] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b15840 is same with the state(5) to be set 00:11:04.086 14:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.086 14:15:45 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:04.086 14:15:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.086 14:15:45 -- common/autotest_common.sh@10 -- # set +x 00:11:04.086 [2024-04-26 14:15:45.654274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.086 [2024-04-26 14:15:45.654325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.086 [2024-04-26 14:15:45.654345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.086 [2024-04-26 14:15:45.654361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.086 [2024-04-26 14:15:45.654378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.086 [2024-04-26 14:15:45.654402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.086 [2024-04-26 14:15:45.654419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.086 [2024-04-26 14:15:45.654434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.086 [2024-04-26 14:15:45.654449] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cec210 is same with the state(5) to be set 00:11:04.086 [2024-04-26 14:15:45.654875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.086 [2024-04-26 14:15:45.654907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:11:04.086 [2024-04-26 14:15:45.654952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.086 [2024-04-26 14:15:45.654982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.086 [2024-04-26 14:15:45.655014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.086 [2024-04-26 14:15:45.655042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.086 [2024-04-26 14:15:45.655074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.655102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 [2024-04-26 14:15:45.655133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.655161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 [2024-04-26 14:15:45.655195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.655225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 [2024-04-26 14:15:45.655257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.655287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 [2024-04-26 14:15:45.655318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.655339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 [2024-04-26 14:15:45.655357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.655374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 [2024-04-26 14:15:45.655392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.655409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 [2024-04-26 14:15:45.655427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.655443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:11:04.346 [2024-04-26 14:15:45.655466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.655483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 [2024-04-26 14:15:45.655501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.655517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 [2024-04-26 14:15:45.655536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.655559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 [2024-04-26 14:15:45.655577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.655593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 [2024-04-26 14:15:45.655611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.655626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 [2024-04-26 14:15:45.655660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.655687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 [2024-04-26 14:15:45.655705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.655722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 [2024-04-26 14:15:45.655740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.655756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 [2024-04-26 14:15:45.655774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.655790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 [2024-04-26 14:15:45.655808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.655825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 
[2024-04-26 14:15:45.655843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.655860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 [2024-04-26 14:15:45.655878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.655895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 [2024-04-26 14:15:45.655913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.655934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 [2024-04-26 14:15:45.655953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.655970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 [2024-04-26 14:15:45.655989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.656006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 [2024-04-26 14:15:45.656024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.656041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 [2024-04-26 14:15:45.656059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.656075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 [2024-04-26 14:15:45.656093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.656110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 [2024-04-26 14:15:45.656128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.346 [2024-04-26 14:15:45.656145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.346 [2024-04-26 14:15:45.656163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.656181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 
14:15:45.656199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.656215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.656234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.656250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.656268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.656284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.656303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.656319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.656337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.656357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.656375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.656392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.656411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.656429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.656448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.656472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.656490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.656506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.656523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.656540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 
14:15:45.656558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.656575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.656593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.656609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.656627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.656649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.656668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.656690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.656709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.656726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.656753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.656769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.656787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.656803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.656825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.656843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.656862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.656878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.656897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.656913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 
14:15:45.656932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.656949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.656967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.656984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.657002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.657018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.657036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.657053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.657071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.657087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.657106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.657123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.657141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.657159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.657177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.657193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.657211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.657228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.657246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.657265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 
14:15:45.657284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.657301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.657320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.657336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.657355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:04.347 [2024-04-26 14:15:45.657372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.347 [2024-04-26 14:15:45.657448] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x213d8d0 was disconnected and freed. reset controller. 00:11:04.347 [2024-04-26 14:15:45.658773] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:11:04.347 14:15:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.347 14:15:45 -- target/host_management.sh@87 -- # sleep 1 00:11:04.347 task offset: 73728 on job bdev=Nvme0n1 fails 00:11:04.347 00:11:04.347 Latency(us) 00:11:04.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:04.347 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:04.347 Job: Nvme0n1 ended in about 0.42 seconds with error 00:11:04.347 Verification LBA range: start 0x0 length 0x400 00:11:04.347 Nvme0n1 : 0.42 1362.71 85.17 151.41 0.00 40839.19 3689.43 39030.33 00:11:04.347 =================================================================================================================== 00:11:04.347 Total : 1362.71 85.17 151.41 0.00 40839.19 3689.43 39030.33 00:11:04.347 [2024-04-26 14:15:45.661026] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:04.347 [2024-04-26 14:15:45.661063] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cec210 (9): Bad file descriptor 00:11:04.347 [2024-04-26 14:15:45.666165] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
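The abort storm above is the intended effect of the test: while bdevperf drives WRITEs, the script revokes the host's access, so every queued command completes as ABORTED - SQ DELETION and bdevperf resets the controller. A hedged sketch of the two RPCs behind it (the rpc.py path is assumed; the RPC names and NQNs are taken from the trace):

    rpc_py=./spdk/scripts/rpc.py                      # assumed path
    # revoke: in-flight I/O on the qpair now fails and the job aborts
    $rpc_py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 \
            nqn.2016-06.io.spdk:host0
    # restore: the controller reset that follows can reconnect successfully
    $rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
            nqn.2016-06.io.spdk:host0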
00:11:05.282 14:15:46 -- target/host_management.sh@91 -- # kill -9 3121476 00:11:05.282 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3121476) - No such process 00:11:05.282 14:15:46 -- target/host_management.sh@91 -- # true 00:11:05.282 14:15:46 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:05.282 14:15:46 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:05.282 14:15:46 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:05.282 14:15:46 -- nvmf/common.sh@521 -- # config=() 00:11:05.282 14:15:46 -- nvmf/common.sh@521 -- # local subsystem config 00:11:05.282 14:15:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:05.282 14:15:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:05.282 { 00:11:05.282 "params": { 00:11:05.282 "name": "Nvme$subsystem", 00:11:05.282 "trtype": "$TEST_TRANSPORT", 00:11:05.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:05.282 "adrfam": "ipv4", 00:11:05.282 "trsvcid": "$NVMF_PORT", 00:11:05.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:05.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:05.282 "hdgst": ${hdgst:-false}, 00:11:05.282 "ddgst": ${ddgst:-false} 00:11:05.282 }, 00:11:05.282 "method": "bdev_nvme_attach_controller" 00:11:05.282 } 00:11:05.282 EOF 00:11:05.282 )") 00:11:05.282 14:15:46 -- nvmf/common.sh@543 -- # cat 00:11:05.282 14:15:46 -- nvmf/common.sh@545 -- # jq . 00:11:05.282 14:15:46 -- nvmf/common.sh@546 -- # IFS=, 00:11:05.282 14:15:46 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:05.282 "params": { 00:11:05.282 "name": "Nvme0", 00:11:05.282 "trtype": "tcp", 00:11:05.282 "traddr": "10.0.0.2", 00:11:05.282 "adrfam": "ipv4", 00:11:05.282 "trsvcid": "4420", 00:11:05.282 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:05.282 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:05.282 "hdgst": false, 00:11:05.282 "ddgst": false 00:11:05.282 }, 00:11:05.282 "method": "bdev_nvme_attach_controller" 00:11:05.282 }' 00:11:05.282 [2024-04-26 14:15:46.709998] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:11:05.282 [2024-04-26 14:15:46.710100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3121690 ] 00:11:05.282 EAL: No free 2048 kB hugepages reported on node 1 00:11:05.282 [2024-04-26 14:15:46.769892] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.541 [2024-04-26 14:15:46.884528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.800 Running I/O for 1 seconds... 
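The --json /dev/fd/62 argument above corresponds to process substitution: gen_nvmf_target_json prints the attach-controller config shown in the trace and bdevperf reads it as a pseudo-file. A sketch of the pattern, assuming the same working directory as the run (only the inner params/method object appears verbatim in the trace; the surrounding plumbing is reconstructed):

    ./build/examples/bdevperf --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 1
    # equivalent to handing bdevperf a file containing the JSON printed above,
    # i.e. a bdev_nvme_attach_controller call for Nvme0 at 10.0.0.2:4420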
00:11:06.736 00:11:06.736 Latency(us) 00:11:06.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:06.736 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:06.736 Verification LBA range: start 0x0 length 0x400 00:11:06.736 Nvme0n1 : 1.01 1391.86 86.99 0.00 0.00 45110.89 7864.32 39807.05 00:11:06.736 =================================================================================================================== 00:11:06.736 Total : 1391.86 86.99 0.00 0.00 45110.89 7864.32 39807.05 00:11:06.994 14:15:48 -- target/host_management.sh@102 -- # stoptarget 00:11:06.994 14:15:48 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:06.994 14:15:48 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:06.994 14:15:48 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:06.994 14:15:48 -- target/host_management.sh@40 -- # nvmftestfini 00:11:06.994 14:15:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:06.994 14:15:48 -- nvmf/common.sh@117 -- # sync 00:11:06.994 14:15:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:06.994 14:15:48 -- nvmf/common.sh@120 -- # set +e 00:11:06.994 14:15:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:06.994 14:15:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:06.994 rmmod nvme_tcp 00:11:06.994 rmmod nvme_fabrics 00:11:06.994 rmmod nvme_keyring 00:11:06.994 14:15:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:06.994 14:15:48 -- nvmf/common.sh@124 -- # set -e 00:11:06.994 14:15:48 -- nvmf/common.sh@125 -- # return 0 00:11:06.994 14:15:48 -- nvmf/common.sh@478 -- # '[' -n 3121432 ']' 00:11:06.994 14:15:48 -- nvmf/common.sh@479 -- # killprocess 3121432 00:11:06.995 14:15:48 -- common/autotest_common.sh@936 -- # '[' -z 3121432 ']' 00:11:06.995 14:15:48 -- common/autotest_common.sh@940 -- # kill -0 3121432 00:11:06.995 14:15:48 -- common/autotest_common.sh@941 -- # uname 00:11:06.995 14:15:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:06.995 14:15:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3121432 00:11:06.995 14:15:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:06.995 14:15:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:06.995 14:15:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3121432' 00:11:06.995 killing process with pid 3121432 00:11:06.995 14:15:48 -- common/autotest_common.sh@955 -- # kill 3121432 00:11:06.995 14:15:48 -- common/autotest_common.sh@960 -- # wait 3121432 00:11:07.255 [2024-04-26 14:15:48.737917] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:07.255 14:15:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:07.255 14:15:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:07.255 14:15:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:07.255 14:15:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:07.255 14:15:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:07.255 14:15:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.255 14:15:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:07.255 14:15:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.819 14:15:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
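The teardown traced above is nvmftestfini: unload the initiator-side kernel modules, kill the target, and drop the namespaced interface state. Condensed into a sketch (module and interface names are from the trace; the namespace-removal command stands in for the harness's _remove_spdk_ns helper and is an assumption):

    modprobe -v -r nvme-tcp                  # also drops nvme_fabrics/nvme_keyring
    kill -0 "$nvmfpid" 2>/dev/null && kill "$nvmfpid"   # $nvmfpid: 3121432 here
    ip netns delete cvl_0_0_ns_spdk          # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                 # final flush, as in the trace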
00:11:09.819 00:11:09.819 real 0m6.496s 00:11:09.819 user 0m19.201s 00:11:09.819 sys 0m1.068s 00:11:09.819 14:15:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:09.819 14:15:50 -- common/autotest_common.sh@10 -- # set +x 00:11:09.819 ************************************ 00:11:09.819 END TEST nvmf_host_management 00:11:09.819 ************************************ 00:11:09.819 14:15:50 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:09.819 00:11:09.819 real 0m8.321s 00:11:09.819 user 0m19.802s 00:11:09.819 sys 0m2.282s 00:11:09.819 14:15:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:09.819 14:15:50 -- common/autotest_common.sh@10 -- # set +x 00:11:09.819 ************************************ 00:11:09.819 END TEST nvmf_host_management 00:11:09.819 ************************************ 00:11:09.819 14:15:50 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:09.819 14:15:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:09.819 14:15:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:09.819 14:15:50 -- common/autotest_common.sh@10 -- # set +x 00:11:09.819 ************************************ 00:11:09.819 START TEST nvmf_lvol 00:11:09.819 ************************************ 00:11:09.819 14:15:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:09.819 * Looking for test storage... 00:11:09.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.819 14:15:51 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.819 14:15:51 -- nvmf/common.sh@7 -- # uname -s 00:11:09.819 14:15:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.819 14:15:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.819 14:15:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.819 14:15:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.819 14:15:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.819 14:15:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.819 14:15:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.819 14:15:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.819 14:15:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.819 14:15:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.819 14:15:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:11:09.819 14:15:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:11:09.819 14:15:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.819 14:15:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.819 14:15:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.819 14:15:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.819 14:15:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.819 14:15:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.819 14:15:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.819 14:15:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.819 14:15:51 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.819 14:15:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.819 14:15:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.819 14:15:51 -- paths/export.sh@5 -- # export PATH 00:11:09.819 14:15:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.819 14:15:51 -- nvmf/common.sh@47 -- # : 0 00:11:09.819 14:15:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:09.819 14:15:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:09.819 14:15:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.819 14:15:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.819 14:15:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.819 14:15:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:09.819 14:15:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:09.819 14:15:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:09.819 14:15:51 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:09.819 14:15:51 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:09.819 14:15:51 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:09.819 14:15:51 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:09.819 14:15:51 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:09.819 14:15:51 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:09.819 14:15:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:09.819 14:15:51 -- nvmf/common.sh@435 -- # 
trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.819 14:15:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:09.819 14:15:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:09.819 14:15:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:09.819 14:15:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.819 14:15:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:09.819 14:15:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.819 14:15:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:09.819 14:15:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:09.819 14:15:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:09.819 14:15:51 -- common/autotest_common.sh@10 -- # set +x 00:11:11.194 14:15:52 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:11.194 14:15:52 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:11.194 14:15:52 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:11.194 14:15:52 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:11.194 14:15:52 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:11.194 14:15:52 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:11.194 14:15:52 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:11.194 14:15:52 -- nvmf/common.sh@295 -- # net_devs=() 00:11:11.194 14:15:52 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:11.194 14:15:52 -- nvmf/common.sh@296 -- # e810=() 00:11:11.194 14:15:52 -- nvmf/common.sh@296 -- # local -ga e810 00:11:11.194 14:15:52 -- nvmf/common.sh@297 -- # x722=() 00:11:11.194 14:15:52 -- nvmf/common.sh@297 -- # local -ga x722 00:11:11.194 14:15:52 -- nvmf/common.sh@298 -- # mlx=() 00:11:11.194 14:15:52 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:11.194 14:15:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:11.194 14:15:52 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:11.194 14:15:52 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:11.194 14:15:52 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:11.194 14:15:52 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:11.194 14:15:52 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:11.194 14:15:52 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:11.194 14:15:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:11.194 14:15:52 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:11.194 14:15:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:11.194 14:15:52 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:11.194 14:15:52 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:11.194 14:15:52 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:11.194 14:15:52 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:11.194 14:15:52 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:11.194 14:15:52 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:11.194 14:15:52 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:11.194 14:15:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:11.194 14:15:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:11:11.194 Found 0000:08:00.0 (0x8086 - 0x159b) 00:11:11.194 14:15:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:11.194 14:15:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:11.194 
14:15:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.194 14:15:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.194 14:15:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:11.194 14:15:52 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:11.194 14:15:52 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:11:11.194 Found 0000:08:00.1 (0x8086 - 0x159b) 00:11:11.194 14:15:52 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:11.194 14:15:52 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:11.194 14:15:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:11.194 14:15:52 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:11.194 14:15:52 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:11.194 14:15:52 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:11.194 14:15:52 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:11.194 14:15:52 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:11.194 14:15:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:11.194 14:15:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.194 14:15:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:11.194 14:15:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.194 14:15:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:11:11.194 Found net devices under 0000:08:00.0: cvl_0_0 00:11:11.194 14:15:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.194 14:15:52 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:11.194 14:15:52 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.194 14:15:52 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:11.194 14:15:52 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.194 14:15:52 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:11:11.194 Found net devices under 0000:08:00.1: cvl_0_1 00:11:11.194 14:15:52 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.194 14:15:52 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:11.194 14:15:52 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:11.194 14:15:52 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:11.194 14:15:52 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:11.194 14:15:52 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:11.194 14:15:52 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:11.194 14:15:52 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:11.194 14:15:52 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:11.194 14:15:52 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:11.194 14:15:52 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:11.194 14:15:52 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:11.194 14:15:52 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:11.194 14:15:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:11.194 14:15:52 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:11.194 14:15:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:11.194 14:15:52 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:11.194 14:15:52 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:11.194 14:15:52 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:11.194 14:15:52 -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:11:11.194 14:15:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:11.194 14:15:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:11.194 14:15:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:11.194 14:15:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:11.194 14:15:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:11.194 14:15:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:11.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:11.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:11:11.194 00:11:11.194 --- 10.0.0.2 ping statistics --- 00:11:11.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.194 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:11:11.194 14:15:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:11.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:11.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:11:11.194 00:11:11.194 --- 10.0.0.1 ping statistics --- 00:11:11.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.194 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:11:11.452 14:15:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:11.452 14:15:52 -- nvmf/common.sh@411 -- # return 0 00:11:11.452 14:15:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:11.452 14:15:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:11.452 14:15:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:11.452 14:15:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:11.452 14:15:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:11.452 14:15:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:11.452 14:15:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:11.452 14:15:52 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:11.452 14:15:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:11.452 14:15:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:11.452 14:15:52 -- common/autotest_common.sh@10 -- # set +x 00:11:11.452 14:15:52 -- nvmf/common.sh@470 -- # nvmfpid=3123322 00:11:11.452 14:15:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:11.452 14:15:52 -- nvmf/common.sh@471 -- # waitforlisten 3123322 00:11:11.452 14:15:52 -- common/autotest_common.sh@817 -- # '[' -z 3123322 ']' 00:11:11.452 14:15:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.452 14:15:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:11.452 14:15:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.452 14:15:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:11.452 14:15:52 -- common/autotest_common.sh@10 -- # set +x 00:11:11.452 [2024-04-26 14:15:52.841488] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
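The nvmf_tcp_init sequence traced above is what makes a single host look like two: one port of the e810 pair is moved into a private network namespace for the target while the other stays in the root namespace for the initiator, so NVMe/TCP traffic crosses real interfaces. A condensed sketch of that sequence, with interface names and addresses taken from this run:

    # Sketch: split a NIC pair between the root namespace (initiator)
    # and a private namespace (target) for loopback NVMe/TCP testing.
    TGT_IF=cvl_0_0 INIT_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INIT_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"              # target port moves into the netns
    ip addr add 10.0.0.1/24 dev "$INIT_IF"         # initiator stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INIT_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                             # sanity check both directions,
    ip netns exec "$NS" ping -c 1 10.0.0.1         # mirroring the pings above

The closing pings in both directions fail fast if the interface move or the addressing went wrong, which is why every nvmftestinit in this log ends with a pair of ping statistics blocks.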
00:11:11.452 [2024-04-26 14:15:52.841576] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.452 EAL: No free 2048 kB hugepages reported on node 1 00:11:11.452 [2024-04-26 14:15:52.906776] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:11.710 [2024-04-26 14:15:53.023143] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.710 [2024-04-26 14:15:53.023194] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.710 [2024-04-26 14:15:53.023210] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.710 [2024-04-26 14:15:53.023224] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.710 [2024-04-26 14:15:53.023237] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:11.710 [2024-04-26 14:15:53.023323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.710 [2024-04-26 14:15:53.023376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.710 [2024-04-26 14:15:53.023380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.710 14:15:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:11.710 14:15:53 -- common/autotest_common.sh@850 -- # return 0 00:11:11.710 14:15:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:11.710 14:15:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:11.710 14:15:53 -- common/autotest_common.sh@10 -- # set +x 00:11:11.710 14:15:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.710 14:15:53 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:11.967 [2024-04-26 14:15:53.426082] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.967 14:15:53 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:12.225 14:15:53 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:12.225 14:15:53 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:12.792 14:15:54 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:12.792 14:15:54 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:13.050 14:15:54 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:13.307 14:15:54 -- target/nvmf_lvol.sh@29 -- # lvs=ff78ad92-dee2-4f6b-8411-01935a4861c1 00:11:13.307 14:15:54 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ff78ad92-dee2-4f6b-8411-01935a4861c1 lvol 20 00:11:13.565 14:15:54 -- target/nvmf_lvol.sh@32 -- # lvol=0c27a69e-a112-4835-9cc1-b903187b5bb5 00:11:13.565 14:15:54 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:13.822 14:15:55 -- target/nvmf_lvol.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0c27a69e-a112-4835-9cc1-b903187b5bb5 00:11:14.079 14:15:55 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:14.079 [2024-04-26 14:15:55.634410] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.337 14:15:55 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:14.337 14:15:55 -- target/nvmf_lvol.sh@42 -- # perf_pid=3123656 00:11:14.337 14:15:55 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:14.337 14:15:55 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:14.595 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.530 14:15:56 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 0c27a69e-a112-4835-9cc1-b903187b5bb5 MY_SNAPSHOT 00:11:15.787 14:15:57 -- target/nvmf_lvol.sh@47 -- # snapshot=c4ff44dd-0076-4d88-be41-ce3394cb52e3 00:11:15.787 14:15:57 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 0c27a69e-a112-4835-9cc1-b903187b5bb5 30 00:11:16.044 14:15:57 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c4ff44dd-0076-4d88-be41-ce3394cb52e3 MY_CLONE 00:11:16.614 14:15:57 -- target/nvmf_lvol.sh@49 -- # clone=39087f44-15cd-400a-8978-c45577d21dc1 00:11:16.614 14:15:57 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 39087f44-15cd-400a-8978-c45577d21dc1 00:11:17.268 14:15:58 -- target/nvmf_lvol.sh@53 -- # wait 3123656 00:11:25.378 Initializing NVMe Controllers 00:11:25.378 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:25.378 Controller IO queue size 128, less than required. 00:11:25.378 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:25.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:25.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:25.378 Initialization complete. Launching workers. 
00:11:25.378 ======================================================== 00:11:25.378 Latency(us) 00:11:25.378 Device Information : IOPS MiB/s Average min max 00:11:25.378 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9699.86 37.89 13203.93 2545.59 103422.67 00:11:25.378 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9686.36 37.84 13220.00 2545.36 76097.68 00:11:25.378 ======================================================== 00:11:25.378 Total : 19386.22 75.73 13211.96 2545.36 103422.67 00:11:25.378 00:11:25.378 14:16:06 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:25.378 14:16:06 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0c27a69e-a112-4835-9cc1-b903187b5bb5 00:11:25.378 14:16:06 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ff78ad92-dee2-4f6b-8411-01935a4861c1 00:11:25.636 14:16:07 -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:25.636 14:16:07 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:25.636 14:16:07 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:25.636 14:16:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:25.636 14:16:07 -- nvmf/common.sh@117 -- # sync 00:11:25.636 14:16:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:25.636 14:16:07 -- nvmf/common.sh@120 -- # set +e 00:11:25.636 14:16:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:25.636 14:16:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:25.636 rmmod nvme_tcp 00:11:25.636 rmmod nvme_fabrics 00:11:25.636 rmmod nvme_keyring 00:11:25.636 14:16:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:25.636 14:16:07 -- nvmf/common.sh@124 -- # set -e 00:11:25.636 14:16:07 -- nvmf/common.sh@125 -- # return 0 00:11:25.636 14:16:07 -- nvmf/common.sh@478 -- # '[' -n 3123322 ']' 00:11:25.636 14:16:07 -- nvmf/common.sh@479 -- # killprocess 3123322 00:11:25.636 14:16:07 -- common/autotest_common.sh@936 -- # '[' -z 3123322 ']' 00:11:25.636 14:16:07 -- common/autotest_common.sh@940 -- # kill -0 3123322 00:11:25.636 14:16:07 -- common/autotest_common.sh@941 -- # uname 00:11:25.636 14:16:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:25.636 14:16:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3123322 00:11:25.636 14:16:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:25.636 14:16:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:25.636 14:16:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3123322' 00:11:25.636 killing process with pid 3123322 00:11:25.636 14:16:07 -- common/autotest_common.sh@955 -- # kill 3123322 00:11:25.636 14:16:07 -- common/autotest_common.sh@960 -- # wait 3123322 00:11:25.894 14:16:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:25.894 14:16:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:25.894 14:16:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:25.894 14:16:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:25.894 14:16:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:25.894 14:16:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.894 14:16:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:25.894 14:16:07 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:11:28.434 14:16:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:28.434 00:11:28.434 real 0m18.436s 00:11:28.434 user 1m3.514s 00:11:28.434 sys 0m5.527s 00:11:28.434 14:16:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:28.434 14:16:09 -- common/autotest_common.sh@10 -- # set +x 00:11:28.434 ************************************ 00:11:28.434 END TEST nvmf_lvol 00:11:28.434 ************************************ 00:11:28.434 14:16:09 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:28.434 14:16:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:28.434 14:16:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:28.434 14:16:09 -- common/autotest_common.sh@10 -- # set +x 00:11:28.434 ************************************ 00:11:28.434 START TEST nvmf_lvs_grow 00:11:28.434 ************************************ 00:11:28.434 14:16:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:28.434 * Looking for test storage... 00:11:28.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.434 14:16:09 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.434 14:16:09 -- nvmf/common.sh@7 -- # uname -s 00:11:28.434 14:16:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.434 14:16:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.434 14:16:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.434 14:16:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.434 14:16:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.434 14:16:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.434 14:16:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.434 14:16:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.434 14:16:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.434 14:16:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.434 14:16:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:11:28.434 14:16:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:11:28.434 14:16:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.434 14:16:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.434 14:16:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.434 14:16:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.434 14:16:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.434 14:16:09 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.434 14:16:09 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.434 14:16:09 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.434 14:16:09 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.434 14:16:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.434 14:16:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.434 14:16:09 -- paths/export.sh@5 -- # export PATH 00:11:28.434 14:16:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.434 14:16:09 -- nvmf/common.sh@47 -- # : 0 00:11:28.434 14:16:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:28.434 14:16:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:28.434 14:16:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.434 14:16:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.434 14:16:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.434 14:16:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:28.434 14:16:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:28.434 14:16:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:28.434 14:16:09 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:28.434 14:16:09 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:28.434 14:16:09 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:11:28.434 14:16:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:28.434 14:16:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.434 14:16:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:28.434 14:16:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:28.434 14:16:09 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:11:28.434 14:16:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.434 14:16:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:28.434 14:16:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.434 14:16:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:28.434 14:16:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:28.434 14:16:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:28.434 14:16:09 -- common/autotest_common.sh@10 -- # set +x 00:11:29.811 14:16:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:29.811 14:16:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:29.811 14:16:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:29.811 14:16:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:29.811 14:16:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:29.811 14:16:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:29.811 14:16:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:29.811 14:16:11 -- nvmf/common.sh@295 -- # net_devs=() 00:11:29.811 14:16:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:29.811 14:16:11 -- nvmf/common.sh@296 -- # e810=() 00:11:29.811 14:16:11 -- nvmf/common.sh@296 -- # local -ga e810 00:11:29.811 14:16:11 -- nvmf/common.sh@297 -- # x722=() 00:11:29.811 14:16:11 -- nvmf/common.sh@297 -- # local -ga x722 00:11:29.811 14:16:11 -- nvmf/common.sh@298 -- # mlx=() 00:11:29.811 14:16:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:29.811 14:16:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:29.811 14:16:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:29.811 14:16:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:29.811 14:16:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:29.811 14:16:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:29.811 14:16:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:29.811 14:16:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:29.811 14:16:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:29.811 14:16:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:29.811 14:16:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:29.811 14:16:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:29.811 14:16:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:29.812 14:16:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:29.812 14:16:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:29.812 14:16:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:29.812 14:16:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:29.812 14:16:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:29.812 14:16:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:29.812 14:16:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:11:29.812 Found 0000:08:00.0 (0x8086 - 0x159b) 00:11:29.812 14:16:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:29.812 14:16:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:29.812 14:16:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.812 14:16:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.812 14:16:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:29.812 
14:16:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:29.812 14:16:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:11:29.812 Found 0000:08:00.1 (0x8086 - 0x159b) 00:11:29.812 14:16:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:29.812 14:16:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:29.812 14:16:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.812 14:16:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.812 14:16:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:29.812 14:16:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:29.812 14:16:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:29.812 14:16:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:29.812 14:16:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:29.812 14:16:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.812 14:16:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:29.812 14:16:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.812 14:16:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:11:29.812 Found net devices under 0000:08:00.0: cvl_0_0 00:11:29.812 14:16:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.812 14:16:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:29.812 14:16:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.812 14:16:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:29.812 14:16:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.812 14:16:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:11:29.812 Found net devices under 0000:08:00.1: cvl_0_1 00:11:29.812 14:16:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.812 14:16:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:29.812 14:16:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:29.812 14:16:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:29.812 14:16:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:29.812 14:16:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:29.812 14:16:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.812 14:16:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.812 14:16:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:29.812 14:16:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:29.812 14:16:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:29.812 14:16:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:29.812 14:16:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:29.812 14:16:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:29.812 14:16:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.812 14:16:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:29.812 14:16:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:29.812 14:16:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:29.812 14:16:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:29.812 14:16:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:29.812 14:16:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:29.812 14:16:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:29.812 
14:16:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:29.812 14:16:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:29.812 14:16:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:29.812 14:16:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:29.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:29.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:11:29.812 00:11:29.812 --- 10.0.0.2 ping statistics --- 00:11:29.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.812 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:11:29.812 14:16:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:29.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:29.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:11:29.812 00:11:29.812 --- 10.0.0.1 ping statistics --- 00:11:29.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.812 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:11:29.812 14:16:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.812 14:16:11 -- nvmf/common.sh@411 -- # return 0 00:11:29.812 14:16:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:29.812 14:16:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.812 14:16:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:29.812 14:16:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:29.812 14:16:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.812 14:16:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:29.812 14:16:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:30.071 14:16:11 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:11:30.071 14:16:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:30.071 14:16:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:30.071 14:16:11 -- common/autotest_common.sh@10 -- # set +x 00:11:30.071 14:16:11 -- nvmf/common.sh@470 -- # nvmfpid=3126177 00:11:30.071 14:16:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:30.071 14:16:11 -- nvmf/common.sh@471 -- # waitforlisten 3126177 00:11:30.071 14:16:11 -- common/autotest_common.sh@817 -- # '[' -z 3126177 ']' 00:11:30.071 14:16:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.071 14:16:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:30.071 14:16:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.071 14:16:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:30.071 14:16:11 -- common/autotest_common.sh@10 -- # set +x 00:11:30.071 [2024-04-26 14:16:11.446202] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
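nvmfappstart, traced above, launches nvmf_tgt inside the namespace and then blocks in waitforlisten until the RPC socket answers, which is why none of the later rpc.py calls race the target's startup. A minimal polling sketch, under the assumption that an rpc_get_methods round-trip is an adequate liveness probe; the real helper in autotest_common.sh is more thorough:

    # Sketch of the waitforlisten pattern: poll the app's RPC socket
    # until it answers or the app dies.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 100; i != 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died while starting
            if ./scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
                return 0                             # socket is up and answering
            fi
            sleep 0.5
        done
        return 1
    }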
00:11:30.071 [2024-04-26 14:16:11.446289] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.071 EAL: No free 2048 kB hugepages reported on node 1 00:11:30.071 [2024-04-26 14:16:11.509676] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.071 [2024-04-26 14:16:11.624137] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.071 [2024-04-26 14:16:11.624192] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:30.071 [2024-04-26 14:16:11.624208] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.071 [2024-04-26 14:16:11.624221] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.071 [2024-04-26 14:16:11.624233] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.071 [2024-04-26 14:16:11.624262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.329 14:16:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:30.329 14:16:11 -- common/autotest_common.sh@850 -- # return 0 00:11:30.329 14:16:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:30.329 14:16:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:30.329 14:16:11 -- common/autotest_common.sh@10 -- # set +x 00:11:30.329 14:16:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.329 14:16:11 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:30.587 [2024-04-26 14:16:12.021167] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.587 14:16:12 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:11:30.587 14:16:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:30.587 14:16:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:30.587 14:16:12 -- common/autotest_common.sh@10 -- # set +x 00:11:30.587 ************************************ 00:11:30.587 START TEST lvs_grow_clean 00:11:30.587 ************************************ 00:11:30.587 14:16:12 -- common/autotest_common.sh@1111 -- # lvs_grow 00:11:30.587 14:16:12 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:30.587 14:16:12 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:30.587 14:16:12 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:30.846 14:16:12 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:30.846 14:16:12 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:30.846 14:16:12 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:30.846 14:16:12 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:30.846 14:16:12 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:30.846 14:16:12 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:31.105 14:16:12 -- 
target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:31.105 14:16:12 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:31.363 14:16:12 -- target/nvmf_lvs_grow.sh@28 -- # lvs=c1c1027f-7cf3-4061-9ffa-ea1ec6cbb32b 00:11:31.363 14:16:12 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1c1027f-7cf3-4061-9ffa-ea1ec6cbb32b 00:11:31.363 14:16:12 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:31.621 14:16:13 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:31.621 14:16:13 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:31.621 14:16:13 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c1c1027f-7cf3-4061-9ffa-ea1ec6cbb32b lvol 150 00:11:31.891 14:16:13 -- target/nvmf_lvs_grow.sh@33 -- # lvol=556bd26f-9424-4e0b-9844-cdd3f74b46b8 00:11:31.891 14:16:13 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:31.891 14:16:13 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:32.156 [2024-04-26 14:16:13.634335] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:32.156 [2024-04-26 14:16:13.634411] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:32.156 true 00:11:32.156 14:16:13 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1c1027f-7cf3-4061-9ffa-ea1ec6cbb32b 00:11:32.156 14:16:13 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:32.438 14:16:13 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:32.438 14:16:13 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:32.696 14:16:14 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 556bd26f-9424-4e0b-9844-cdd3f74b46b8 00:11:33.262 14:16:14 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:33.262 [2024-04-26 14:16:14.801827] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.262 14:16:14 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:33.520 14:16:15 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3126609 00:11:33.520 14:16:15 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:33.520 14:16:15 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:33.520 14:16:15 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3126609 
/var/tmp/bdevperf.sock 00:11:33.520 14:16:15 -- common/autotest_common.sh@817 -- # '[' -z 3126609 ']' 00:11:33.520 14:16:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:33.520 14:16:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:33.520 14:16:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:33.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:33.520 14:16:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:33.520 14:16:15 -- common/autotest_common.sh@10 -- # set +x 00:11:33.779 [2024-04-26 14:16:15.112394] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:11:33.779 [2024-04-26 14:16:15.112497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3126609 ] 00:11:33.779 EAL: No free 2048 kB hugepages reported on node 1 00:11:33.779 [2024-04-26 14:16:15.166992] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.779 [2024-04-26 14:16:15.282064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.038 14:16:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:34.038 14:16:15 -- common/autotest_common.sh@850 -- # return 0 00:11:34.038 14:16:15 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:34.296 Nvme0n1 00:11:34.296 14:16:15 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:34.555 [ 00:11:34.555 { 00:11:34.555 "name": "Nvme0n1", 00:11:34.555 "aliases": [ 00:11:34.555 "556bd26f-9424-4e0b-9844-cdd3f74b46b8" 00:11:34.555 ], 00:11:34.555 "product_name": "NVMe disk", 00:11:34.555 "block_size": 4096, 00:11:34.555 "num_blocks": 38912, 00:11:34.555 "uuid": "556bd26f-9424-4e0b-9844-cdd3f74b46b8", 00:11:34.555 "assigned_rate_limits": { 00:11:34.555 "rw_ios_per_sec": 0, 00:11:34.555 "rw_mbytes_per_sec": 0, 00:11:34.555 "r_mbytes_per_sec": 0, 00:11:34.555 "w_mbytes_per_sec": 0 00:11:34.555 }, 00:11:34.555 "claimed": false, 00:11:34.555 "zoned": false, 00:11:34.555 "supported_io_types": { 00:11:34.555 "read": true, 00:11:34.555 "write": true, 00:11:34.555 "unmap": true, 00:11:34.555 "write_zeroes": true, 00:11:34.555 "flush": true, 00:11:34.555 "reset": true, 00:11:34.555 "compare": true, 00:11:34.555 "compare_and_write": true, 00:11:34.555 "abort": true, 00:11:34.555 "nvme_admin": true, 00:11:34.555 "nvme_io": true 00:11:34.555 }, 00:11:34.555 "memory_domains": [ 00:11:34.555 { 00:11:34.555 "dma_device_id": "system", 00:11:34.555 "dma_device_type": 1 00:11:34.555 } 00:11:34.555 ], 00:11:34.555 "driver_specific": { 00:11:34.555 "nvme": [ 00:11:34.555 { 00:11:34.555 "trid": { 00:11:34.555 "trtype": "TCP", 00:11:34.555 "adrfam": "IPv4", 00:11:34.555 "traddr": "10.0.0.2", 00:11:34.555 "trsvcid": "4420", 00:11:34.555 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:34.555 }, 00:11:34.555 "ctrlr_data": { 00:11:34.555 "cntlid": 1, 00:11:34.555 "vendor_id": "0x8086", 00:11:34.555 "model_number": "SPDK bdev Controller", 00:11:34.555 "serial_number": "SPDK0", 
00:11:34.555 "firmware_revision": "24.05", 00:11:34.555 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:34.555 "oacs": { 00:11:34.555 "security": 0, 00:11:34.555 "format": 0, 00:11:34.555 "firmware": 0, 00:11:34.555 "ns_manage": 0 00:11:34.555 }, 00:11:34.555 "multi_ctrlr": true, 00:11:34.555 "ana_reporting": false 00:11:34.555 }, 00:11:34.555 "vs": { 00:11:34.555 "nvme_version": "1.3" 00:11:34.555 }, 00:11:34.555 "ns_data": { 00:11:34.555 "id": 1, 00:11:34.555 "can_share": true 00:11:34.555 } 00:11:34.555 } 00:11:34.555 ], 00:11:34.555 "mp_policy": "active_passive" 00:11:34.555 } 00:11:34.555 } 00:11:34.555 ] 00:11:34.555 14:16:15 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3126638 00:11:34.555 14:16:15 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:34.555 14:16:15 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:34.555 Running I/O for 10 seconds... 00:11:35.491 Latency(us) 00:11:35.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:35.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:35.491 Nvme0n1 : 1.00 13717.00 53.58 0.00 0.00 0.00 0.00 0.00 00:11:35.491 =================================================================================================================== 00:11:35.491 Total : 13717.00 53.58 0.00 0.00 0.00 0.00 0.00 00:11:35.491 00:11:36.426 14:16:17 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c1c1027f-7cf3-4061-9ffa-ea1ec6cbb32b 00:11:36.684 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:36.684 Nvme0n1 : 2.00 13814.00 53.96 0.00 0.00 0.00 0.00 0.00 00:11:36.684 =================================================================================================================== 00:11:36.684 Total : 13814.00 53.96 0.00 0.00 0.00 0.00 0.00 00:11:36.684 00:11:36.684 true 00:11:36.684 14:16:18 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1c1027f-7cf3-4061-9ffa-ea1ec6cbb32b 00:11:36.684 14:16:18 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:37.250 14:16:18 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:37.250 14:16:18 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:37.250 14:16:18 -- target/nvmf_lvs_grow.sh@65 -- # wait 3126638 00:11:37.509 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:37.509 Nvme0n1 : 3.00 13887.33 54.25 0.00 0.00 0.00 0.00 0.00 00:11:37.509 =================================================================================================================== 00:11:37.509 Total : 13887.33 54.25 0.00 0.00 0.00 0.00 0.00 00:11:37.509 00:11:38.885 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:38.885 Nvme0n1 : 4.00 13928.00 54.41 0.00 0.00 0.00 0.00 0.00 00:11:38.885 =================================================================================================================== 00:11:38.885 Total : 13928.00 54.41 0.00 0.00 0.00 0.00 0.00 00:11:38.885 00:11:39.821 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:39.821 Nvme0n1 : 5.00 13966.00 54.55 0.00 0.00 0.00 0.00 0.00 00:11:39.821 =================================================================================================================== 00:11:39.821 Total : 
13966.00 54.55 0.00 0.00 0.00 0.00 0.00 00:11:39.821 00:11:40.758 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:40.758 Nvme0n1 : 6.00 14009.00 54.72 0.00 0.00 0.00 0.00 0.00 00:11:40.758 =================================================================================================================== 00:11:40.758 Total : 14009.00 54.72 0.00 0.00 0.00 0.00 0.00 00:11:40.758 00:11:41.692 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:41.692 Nvme0n1 : 7.00 14031.29 54.81 0.00 0.00 0.00 0.00 0.00 00:11:41.692 =================================================================================================================== 00:11:41.692 Total : 14031.29 54.81 0.00 0.00 0.00 0.00 0.00 00:11:41.692 00:11:42.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:42.625 Nvme0n1 : 8.00 14055.38 54.90 0.00 0.00 0.00 0.00 0.00 00:11:42.625 =================================================================================================================== 00:11:42.625 Total : 14055.38 54.90 0.00 0.00 0.00 0.00 0.00 00:11:42.625 00:11:43.560 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:43.560 Nvme0n1 : 9.00 14088.22 55.03 0.00 0.00 0.00 0.00 0.00 00:11:43.560 =================================================================================================================== 00:11:43.561 Total : 14088.22 55.03 0.00 0.00 0.00 0.00 0.00 00:11:43.561 00:11:44.495 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:44.495 Nvme0n1 : 10.00 14101.80 55.09 0.00 0.00 0.00 0.00 0.00 00:11:44.495 =================================================================================================================== 00:11:44.495 Total : 14101.80 55.09 0.00 0.00 0.00 0.00 0.00 00:11:44.495 00:11:44.495 00:11:44.495 Latency(us) 00:11:44.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:44.495 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:44.495 Nvme0n1 : 10.01 14101.87 55.09 0.00 0.00 9071.90 5048.70 17767.54 00:11:44.495 =================================================================================================================== 00:11:44.495 Total : 14101.87 55.09 0.00 0.00 9071.90 5048.70 17767.54 00:11:44.495 0 00:11:44.753 14:16:26 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3126609 00:11:44.753 14:16:26 -- common/autotest_common.sh@936 -- # '[' -z 3126609 ']' 00:11:44.753 14:16:26 -- common/autotest_common.sh@940 -- # kill -0 3126609 00:11:44.753 14:16:26 -- common/autotest_common.sh@941 -- # uname 00:11:44.753 14:16:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:44.753 14:16:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3126609 00:11:44.753 14:16:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:44.753 14:16:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:44.753 14:16:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3126609' 00:11:44.753 killing process with pid 3126609 00:11:44.753 14:16:26 -- common/autotest_common.sh@955 -- # kill 3126609 00:11:44.753 Received shutdown signal, test time was about 10.000000 seconds 00:11:44.753 00:11:44.753 Latency(us) 00:11:44.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:44.753 =================================================================================================================== 
00:11:44.753 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:44.753 14:16:26 -- common/autotest_common.sh@960 -- # wait 3126609 00:11:44.753 14:16:26 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:45.318 14:16:26 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:11:45.318 14:16:26 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1c1027f-7cf3-4061-9ffa-ea1ec6cbb32b 00:11:45.576 14:16:26 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:11:45.576 14:16:26 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:11:45.576 14:16:26 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:45.834 [2024-04-26 14:16:27.175662] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:45.834 14:16:27 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1c1027f-7cf3-4061-9ffa-ea1ec6cbb32b 00:11:45.834 14:16:27 -- common/autotest_common.sh@638 -- # local es=0 00:11:45.834 14:16:27 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1c1027f-7cf3-4061-9ffa-ea1ec6cbb32b 00:11:45.834 14:16:27 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:45.834 14:16:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:45.834 14:16:27 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:45.834 14:16:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:45.834 14:16:27 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:45.834 14:16:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:45.834 14:16:27 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:45.834 14:16:27 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:45.834 14:16:27 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1c1027f-7cf3-4061-9ffa-ea1ec6cbb32b 00:11:46.091 request: 00:11:46.091 { 00:11:46.091 "uuid": "c1c1027f-7cf3-4061-9ffa-ea1ec6cbb32b", 00:11:46.091 "method": "bdev_lvol_get_lvstores", 00:11:46.091 "req_id": 1 00:11:46.091 } 00:11:46.091 Got JSON-RPC error response 00:11:46.091 response: 00:11:46.091 { 00:11:46.091 "code": -19, 00:11:46.091 "message": "No such device" 00:11:46.091 } 00:11:46.091 14:16:27 -- common/autotest_common.sh@641 -- # es=1 00:11:46.091 14:16:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:46.091 14:16:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:46.091 14:16:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:46.091 14:16:27 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:46.348 aio_bdev 00:11:46.348 14:16:27 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 
556bd26f-9424-4e0b-9844-cdd3f74b46b8 00:11:46.348 14:16:27 -- common/autotest_common.sh@885 -- # local bdev_name=556bd26f-9424-4e0b-9844-cdd3f74b46b8 00:11:46.348 14:16:27 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:11:46.348 14:16:27 -- common/autotest_common.sh@887 -- # local i 00:11:46.348 14:16:27 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:11:46.348 14:16:27 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:11:46.348 14:16:27 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:46.604 14:16:27 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 556bd26f-9424-4e0b-9844-cdd3f74b46b8 -t 2000 00:11:46.604 [ 00:11:46.604 { 00:11:46.604 "name": "556bd26f-9424-4e0b-9844-cdd3f74b46b8", 00:11:46.604 "aliases": [ 00:11:46.604 "lvs/lvol" 00:11:46.604 ], 00:11:46.604 "product_name": "Logical Volume", 00:11:46.604 "block_size": 4096, 00:11:46.604 "num_blocks": 38912, 00:11:46.604 "uuid": "556bd26f-9424-4e0b-9844-cdd3f74b46b8", 00:11:46.604 "assigned_rate_limits": { 00:11:46.604 "rw_ios_per_sec": 0, 00:11:46.604 "rw_mbytes_per_sec": 0, 00:11:46.604 "r_mbytes_per_sec": 0, 00:11:46.604 "w_mbytes_per_sec": 0 00:11:46.604 }, 00:11:46.604 "claimed": false, 00:11:46.604 "zoned": false, 00:11:46.604 "supported_io_types": { 00:11:46.604 "read": true, 00:11:46.604 "write": true, 00:11:46.604 "unmap": true, 00:11:46.604 "write_zeroes": true, 00:11:46.604 "flush": false, 00:11:46.604 "reset": true, 00:11:46.604 "compare": false, 00:11:46.604 "compare_and_write": false, 00:11:46.604 "abort": false, 00:11:46.604 "nvme_admin": false, 00:11:46.604 "nvme_io": false 00:11:46.604 }, 00:11:46.604 "driver_specific": { 00:11:46.604 "lvol": { 00:11:46.604 "lvol_store_uuid": "c1c1027f-7cf3-4061-9ffa-ea1ec6cbb32b", 00:11:46.604 "base_bdev": "aio_bdev", 00:11:46.604 "thin_provision": false, 00:11:46.604 "snapshot": false, 00:11:46.604 "clone": false, 00:11:46.604 "esnap_clone": false 00:11:46.605 } 00:11:46.605 } 00:11:46.605 } 00:11:46.605 ] 00:11:46.862 14:16:28 -- common/autotest_common.sh@893 -- # return 0 00:11:46.862 14:16:28 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1c1027f-7cf3-4061-9ffa-ea1ec6cbb32b 00:11:46.862 14:16:28 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:11:47.119 14:16:28 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:11:47.119 14:16:28 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1c1027f-7cf3-4061-9ffa-ea1ec6cbb32b 00:11:47.119 14:16:28 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:11:47.377 14:16:28 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:11:47.377 14:16:28 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 556bd26f-9424-4e0b-9844-cdd3f74b46b8 00:11:47.635 14:16:29 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c1c1027f-7cf3-4061-9ffa-ea1ec6cbb32b 00:11:47.894 14:16:29 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:48.152 14:16:29 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 
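Distilled, the clean run above grows the lvstore while bdevperf drives random writes through the exported namespace, then checks the cluster count. A minimal sketch using the socket, address and NQN from this run; $lvs_uuid is a placeholder for the lvstore UUID (c1c1027f-7cf3-4061-9ffa-ea1ec6cbb32b here):

    RPC=scripts/rpc.py
    # attach the exported namespace to bdevperf over TCP
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    # kick off the queued 10 s randwrite job
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    # grow the lvstore while I/O is in flight, then verify the new capacity
    $RPC bdev_lvol_grow_lvstore -u "$lvs_uuid"
    $RPC bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters'   # 99 after growth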
00:11:48.152 00:11:48.152 real 0m17.538s 00:11:48.152 user 0m17.051s 00:11:48.152 sys 0m1.813s 00:11:48.152 14:16:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:48.152 14:16:29 -- common/autotest_common.sh@10 -- # set +x 00:11:48.152 ************************************ 00:11:48.152 END TEST lvs_grow_clean 00:11:48.152 ************************************ 00:11:48.152 14:16:29 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:48.152 14:16:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:48.152 14:16:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:48.152 14:16:29 -- common/autotest_common.sh@10 -- # set +x 00:11:48.410 ************************************ 00:11:48.410 START TEST lvs_grow_dirty 00:11:48.410 ************************************ 00:11:48.410 14:16:29 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:11:48.410 14:16:29 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:48.410 14:16:29 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:48.410 14:16:29 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:48.410 14:16:29 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:48.410 14:16:29 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:48.410 14:16:29 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:48.410 14:16:29 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:48.410 14:16:29 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:48.410 14:16:29 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:48.668 14:16:30 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:48.668 14:16:30 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:48.926 14:16:30 -- target/nvmf_lvs_grow.sh@28 -- # lvs=8d8b0ccf-0e71-458d-9e8b-5048add668b2 00:11:48.926 14:16:30 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d8b0ccf-0e71-458d-9e8b-5048add668b2 00:11:48.926 14:16:30 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:49.184 14:16:30 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:49.184 14:16:30 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:49.184 14:16:30 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8d8b0ccf-0e71-458d-9e8b-5048add668b2 lvol 150 00:11:49.750 14:16:31 -- target/nvmf_lvs_grow.sh@33 -- # lvol=b04f4aa1-b678-4d8a-bacf-fcec03c0144a 00:11:49.750 14:16:31 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:49.750 14:16:31 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:49.750 [2024-04-26 14:16:31.292430] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 
51200, new block count 102400 00:11:49.750 [2024-04-26 14:16:31.292516] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:49.750 true 00:11:49.750 14:16:31 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d8b0ccf-0e71-458d-9e8b-5048add668b2 00:11:49.750 14:16:31 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:50.317 14:16:31 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:50.317 14:16:31 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:50.575 14:16:31 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b04f4aa1-b678-4d8a-bacf-fcec03c0144a 00:11:50.832 14:16:32 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:51.090 14:16:32 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:51.348 14:16:32 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3128281 00:11:51.348 14:16:32 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:51.348 14:16:32 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3128281 /var/tmp/bdevperf.sock 00:11:51.348 14:16:32 -- common/autotest_common.sh@817 -- # '[' -z 3128281 ']' 00:11:51.348 14:16:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:51.348 14:16:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:51.348 14:16:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:51.348 14:16:32 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:51.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:51.348 14:16:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:51.348 14:16:32 -- common/autotest_common.sh@10 -- # set +x 00:11:51.348 [2024-04-26 14:16:32.731340] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
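The bdevperf invocation a few lines up defines the workload behind every latency table in this log. A reading of the flags as shown, assuming no options beyond those visible:

    build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \   # RPC socket the harness drives it through
        -m 0x2 \                      # pin the reactor to core 1
        -o 4096 -q 128 \              # 4 KiB I/Os at queue depth 128
        -w randwrite -t 10 \          # random writes for 10 seconds
        -S 1 \                        # print per-second stats (the Job: rows)
        -z                            # idle until perform_tests arrives over RPC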
00:11:51.348 [2024-04-26 14:16:32.731435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3128281 ] 00:11:51.348 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.348 [2024-04-26 14:16:32.786095] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.348 [2024-04-26 14:16:32.901232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.606 14:16:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:51.606 14:16:33 -- common/autotest_common.sh@850 -- # return 0 00:11:51.606 14:16:33 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:51.863 Nvme0n1 00:11:51.863 14:16:33 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:52.429 [ 00:11:52.429 { 00:11:52.429 "name": "Nvme0n1", 00:11:52.429 "aliases": [ 00:11:52.429 "b04f4aa1-b678-4d8a-bacf-fcec03c0144a" 00:11:52.429 ], 00:11:52.429 "product_name": "NVMe disk", 00:11:52.429 "block_size": 4096, 00:11:52.429 "num_blocks": 38912, 00:11:52.429 "uuid": "b04f4aa1-b678-4d8a-bacf-fcec03c0144a", 00:11:52.429 "assigned_rate_limits": { 00:11:52.429 "rw_ios_per_sec": 0, 00:11:52.429 "rw_mbytes_per_sec": 0, 00:11:52.429 "r_mbytes_per_sec": 0, 00:11:52.429 "w_mbytes_per_sec": 0 00:11:52.429 }, 00:11:52.429 "claimed": false, 00:11:52.429 "zoned": false, 00:11:52.429 "supported_io_types": { 00:11:52.429 "read": true, 00:11:52.429 "write": true, 00:11:52.429 "unmap": true, 00:11:52.429 "write_zeroes": true, 00:11:52.429 "flush": true, 00:11:52.429 "reset": true, 00:11:52.429 "compare": true, 00:11:52.429 "compare_and_write": true, 00:11:52.429 "abort": true, 00:11:52.429 "nvme_admin": true, 00:11:52.429 "nvme_io": true 00:11:52.429 }, 00:11:52.429 "memory_domains": [ 00:11:52.429 { 00:11:52.429 "dma_device_id": "system", 00:11:52.429 "dma_device_type": 1 00:11:52.429 } 00:11:52.429 ], 00:11:52.429 "driver_specific": { 00:11:52.429 "nvme": [ 00:11:52.429 { 00:11:52.429 "trid": { 00:11:52.429 "trtype": "TCP", 00:11:52.429 "adrfam": "IPv4", 00:11:52.429 "traddr": "10.0.0.2", 00:11:52.429 "trsvcid": "4420", 00:11:52.429 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:52.429 }, 00:11:52.429 "ctrlr_data": { 00:11:52.429 "cntlid": 1, 00:11:52.429 "vendor_id": "0x8086", 00:11:52.429 "model_number": "SPDK bdev Controller", 00:11:52.429 "serial_number": "SPDK0", 00:11:52.429 "firmware_revision": "24.05", 00:11:52.429 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:52.429 "oacs": { 00:11:52.429 "security": 0, 00:11:52.429 "format": 0, 00:11:52.429 "firmware": 0, 00:11:52.429 "ns_manage": 0 00:11:52.429 }, 00:11:52.429 "multi_ctrlr": true, 00:11:52.429 "ana_reporting": false 00:11:52.429 }, 00:11:52.429 "vs": { 00:11:52.429 "nvme_version": "1.3" 00:11:52.429 }, 00:11:52.429 "ns_data": { 00:11:52.429 "id": 1, 00:11:52.429 "can_share": true 00:11:52.429 } 00:11:52.429 } 00:11:52.429 ], 00:11:52.429 "mp_policy": "active_passive" 00:11:52.429 } 00:11:52.429 } 00:11:52.429 ] 00:11:52.429 14:16:33 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3128387 00:11:52.429 14:16:33 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:52.429 14:16:33 -- target/nvmf_lvs_grow.sh@55 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:52.429 Running I/O for 10 seconds... 00:11:53.366 Latency(us) 00:11:53.366 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:53.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:53.366 Nvme0n1 : 1.00 12828.00 50.11 0.00 0.00 0.00 0.00 0.00 00:11:53.366 =================================================================================================================== 00:11:53.366 Total : 12828.00 50.11 0.00 0.00 0.00 0.00 0.00 00:11:53.366 00:11:54.297 14:16:35 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8d8b0ccf-0e71-458d-9e8b-5048add668b2 00:11:54.297 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:54.297 Nvme0n1 : 2.00 12954.50 50.60 0.00 0.00 0.00 0.00 0.00 00:11:54.297 =================================================================================================================== 00:11:54.297 Total : 12954.50 50.60 0.00 0.00 0.00 0.00 0.00 00:11:54.297 00:11:54.555 true 00:11:54.555 14:16:36 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d8b0ccf-0e71-458d-9e8b-5048add668b2 00:11:54.555 14:16:36 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:54.812 14:16:36 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:54.812 14:16:36 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:54.812 14:16:36 -- target/nvmf_lvs_grow.sh@65 -- # wait 3128387 00:11:55.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:55.378 Nvme0n1 : 3.00 12996.67 50.77 0.00 0.00 0.00 0.00 0.00 00:11:55.378 =================================================================================================================== 00:11:55.378 Total : 12996.67 50.77 0.00 0.00 0.00 0.00 0.00 00:11:55.378 00:11:56.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:56.390 Nvme0n1 : 4.00 13049.50 50.97 0.00 0.00 0.00 0.00 0.00 00:11:56.390 =================================================================================================================== 00:11:56.390 Total : 13049.50 50.97 0.00 0.00 0.00 0.00 0.00 00:11:56.390 00:11:57.326 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:57.326 Nvme0n1 : 5.00 13081.20 51.10 0.00 0.00 0.00 0.00 0.00 00:11:57.326 =================================================================================================================== 00:11:57.326 Total : 13081.20 51.10 0.00 0.00 0.00 0.00 0.00 00:11:57.326 00:11:58.701 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:58.701 Nvme0n1 : 6.00 13102.33 51.18 0.00 0.00 0.00 0.00 0.00 00:11:58.701 =================================================================================================================== 00:11:58.701 Total : 13102.33 51.18 0.00 0.00 0.00 0.00 0.00 00:11:58.701 00:11:59.635 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:59.635 Nvme0n1 : 7.00 13117.43 51.24 0.00 0.00 0.00 0.00 0.00 00:11:59.635 =================================================================================================================== 00:11:59.635 Total : 13117.43 51.24 0.00 0.00 0.00 0.00 0.00 00:11:59.635 00:12:00.569 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:12:00.569 Nvme0n1 : 8.00 13144.62 51.35 0.00 0.00 0.00 0.00 0.00 00:12:00.569 =================================================================================================================== 00:12:00.569 Total : 13144.62 51.35 0.00 0.00 0.00 0.00 0.00 00:12:00.569 00:12:01.504 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:01.504 Nvme0n1 : 9.00 13165.78 51.43 0.00 0.00 0.00 0.00 0.00 00:12:01.504 =================================================================================================================== 00:12:01.504 Total : 13165.78 51.43 0.00 0.00 0.00 0.00 0.00 00:12:01.504 00:12:02.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:02.438 Nvme0n1 : 10.00 13182.70 51.49 0.00 0.00 0.00 0.00 0.00 00:12:02.438 =================================================================================================================== 00:12:02.438 Total : 13182.70 51.49 0.00 0.00 0.00 0.00 0.00 00:12:02.438 00:12:02.438 00:12:02.438 Latency(us) 00:12:02.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:02.438 Nvme0n1 : 10.01 13186.05 51.51 0.00 0.00 9701.57 7475.96 18252.99 00:12:02.438 =================================================================================================================== 00:12:02.438 Total : 13186.05 51.51 0.00 0.00 9701.57 7475.96 18252.99 00:12:02.438 0 00:12:02.438 14:16:43 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3128281 00:12:02.438 14:16:43 -- common/autotest_common.sh@936 -- # '[' -z 3128281 ']' 00:12:02.438 14:16:43 -- common/autotest_common.sh@940 -- # kill -0 3128281 00:12:02.438 14:16:43 -- common/autotest_common.sh@941 -- # uname 00:12:02.438 14:16:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:02.438 14:16:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3128281 00:12:02.438 14:16:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:02.438 14:16:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:02.438 14:16:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3128281' 00:12:02.438 killing process with pid 3128281 00:12:02.438 14:16:43 -- common/autotest_common.sh@955 -- # kill 3128281 00:12:02.438 Received shutdown signal, test time was about 10.000000 seconds 00:12:02.438 00:12:02.438 Latency(us) 00:12:02.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.438 =================================================================================================================== 00:12:02.439 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:02.439 14:16:43 -- common/autotest_common.sh@960 -- # wait 3128281 00:12:02.697 14:16:44 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:02.955 14:16:44 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d8b0ccf-0e71-458d-9e8b-5048add668b2 00:12:02.955 14:16:44 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:12:03.213 14:16:44 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:12:03.213 14:16:44 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:12:03.213 14:16:44 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 3126177 00:12:03.214 
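This is where the dirty variant earns its name: instead of tearing the lvstore down, the test SIGKILLs the nvmf target out from under it, so the blobstore metadata on the AIO file is never flushed cleanly. In outline, with the PID from this run:

    # stop the I/O path, then hard-kill the target
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    kill -9 3126177          # lvstore superblob left dirty on the AIO file
    # the restart that follows must replay metadata -- see the
    # "Performing recovery on blobstore" notices further down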
14:16:44 -- target/nvmf_lvs_grow.sh@74 -- # wait 3126177 00:12:03.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 3126177 Killed "${NVMF_APP[@]}" "$@" 00:12:03.214 14:16:44 -- target/nvmf_lvs_grow.sh@74 -- # true 00:12:03.214 14:16:44 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:12:03.214 14:16:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:03.214 14:16:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:03.214 14:16:44 -- common/autotest_common.sh@10 -- # set +x 00:12:03.214 14:16:44 -- nvmf/common.sh@470 -- # nvmfpid=3129400 00:12:03.214 14:16:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:03.214 14:16:44 -- nvmf/common.sh@471 -- # waitforlisten 3129400 00:12:03.214 14:16:44 -- common/autotest_common.sh@817 -- # '[' -z 3129400 ']' 00:12:03.214 14:16:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.214 14:16:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:03.214 14:16:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.214 14:16:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:03.214 14:16:44 -- common/autotest_common.sh@10 -- # set +x 00:12:03.472 [2024-04-26 14:16:44.799182] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:12:03.472 [2024-04-26 14:16:44.799267] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.472 EAL: No free 2048 kB hugepages reported on node 1 00:12:03.472 [2024-04-26 14:16:44.864823] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.472 [2024-04-26 14:16:44.978627] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.473 [2024-04-26 14:16:44.978700] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.473 [2024-04-26 14:16:44.978717] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.473 [2024-04-26 14:16:44.978731] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.473 [2024-04-26 14:16:44.978743] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
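The restart above is the stock nvmfappstart/waitforlisten pattern. A condensed sketch; the polling loop is an illustrative stand-in, not the actual waitforlisten implementation from autotest_common.sh:

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # poll until the target answers on its RPC socket
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done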
00:12:03.473 [2024-04-26 14:16:44.978773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.731 14:16:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:03.731 14:16:45 -- common/autotest_common.sh@850 -- # return 0 00:12:03.731 14:16:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:03.731 14:16:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:03.731 14:16:45 -- common/autotest_common.sh@10 -- # set +x 00:12:03.731 14:16:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.731 14:16:45 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:03.989 [2024-04-26 14:16:45.381953] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:03.989 [2024-04-26 14:16:45.382090] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:03.989 [2024-04-26 14:16:45.382145] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:03.989 14:16:45 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:12:03.989 14:16:45 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev b04f4aa1-b678-4d8a-bacf-fcec03c0144a 00:12:03.989 14:16:45 -- common/autotest_common.sh@885 -- # local bdev_name=b04f4aa1-b678-4d8a-bacf-fcec03c0144a 00:12:03.989 14:16:45 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:12:03.989 14:16:45 -- common/autotest_common.sh@887 -- # local i 00:12:03.989 14:16:45 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:12:03.989 14:16:45 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:12:03.989 14:16:45 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:04.247 14:16:45 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b04f4aa1-b678-4d8a-bacf-fcec03c0144a -t 2000 00:12:04.505 [ 00:12:04.505 { 00:12:04.505 "name": "b04f4aa1-b678-4d8a-bacf-fcec03c0144a", 00:12:04.505 "aliases": [ 00:12:04.505 "lvs/lvol" 00:12:04.505 ], 00:12:04.505 "product_name": "Logical Volume", 00:12:04.505 "block_size": 4096, 00:12:04.505 "num_blocks": 38912, 00:12:04.505 "uuid": "b04f4aa1-b678-4d8a-bacf-fcec03c0144a", 00:12:04.505 "assigned_rate_limits": { 00:12:04.505 "rw_ios_per_sec": 0, 00:12:04.505 "rw_mbytes_per_sec": 0, 00:12:04.505 "r_mbytes_per_sec": 0, 00:12:04.505 "w_mbytes_per_sec": 0 00:12:04.505 }, 00:12:04.505 "claimed": false, 00:12:04.505 "zoned": false, 00:12:04.505 "supported_io_types": { 00:12:04.505 "read": true, 00:12:04.505 "write": true, 00:12:04.505 "unmap": true, 00:12:04.505 "write_zeroes": true, 00:12:04.505 "flush": false, 00:12:04.505 "reset": true, 00:12:04.505 "compare": false, 00:12:04.505 "compare_and_write": false, 00:12:04.505 "abort": false, 00:12:04.505 "nvme_admin": false, 00:12:04.505 "nvme_io": false 00:12:04.505 }, 00:12:04.505 "driver_specific": { 00:12:04.505 "lvol": { 00:12:04.505 "lvol_store_uuid": "8d8b0ccf-0e71-458d-9e8b-5048add668b2", 00:12:04.505 "base_bdev": "aio_bdev", 00:12:04.505 "thin_provision": false, 00:12:04.505 "snapshot": false, 00:12:04.505 "clone": false, 00:12:04.505 "esnap_clone": false 00:12:04.505 } 00:12:04.505 } 00:12:04.505 } 00:12:04.505 ] 00:12:04.505 14:16:45 -- common/autotest_common.sh@893 -- # return 0 00:12:04.505 14:16:45 -- target/nvmf_lvs_grow.sh@78 -- # jq 
-r '.[0].free_clusters' 00:12:04.505 14:16:45 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d8b0ccf-0e71-458d-9e8b-5048add668b2 00:12:04.763 14:16:46 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:12:04.763 14:16:46 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d8b0ccf-0e71-458d-9e8b-5048add668b2 00:12:04.763 14:16:46 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:12:05.020 14:16:46 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:12:05.021 14:16:46 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:05.279 [2024-04-26 14:16:46.787391] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:05.279 14:16:46 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d8b0ccf-0e71-458d-9e8b-5048add668b2 00:12:05.279 14:16:46 -- common/autotest_common.sh@638 -- # local es=0 00:12:05.279 14:16:46 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d8b0ccf-0e71-458d-9e8b-5048add668b2 00:12:05.279 14:16:46 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:05.279 14:16:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:05.279 14:16:46 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:05.279 14:16:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:05.279 14:16:46 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:05.279 14:16:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:05.279 14:16:46 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:05.279 14:16:46 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:05.279 14:16:46 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d8b0ccf-0e71-458d-9e8b-5048add668b2 00:12:05.538 request: 00:12:05.538 { 00:12:05.538 "uuid": "8d8b0ccf-0e71-458d-9e8b-5048add668b2", 00:12:05.538 "method": "bdev_lvol_get_lvstores", 00:12:05.538 "req_id": 1 00:12:05.538 } 00:12:05.538 Got JSON-RPC error response 00:12:05.538 response: 00:12:05.538 { 00:12:05.538 "code": -19, 00:12:05.538 "message": "No such device" 00:12:05.538 } 00:12:05.538 14:16:47 -- common/autotest_common.sh@641 -- # es=1 00:12:05.538 14:16:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:05.538 14:16:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:05.538 14:16:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:05.538 14:16:47 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:05.796 aio_bdev 00:12:05.796 14:16:47 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev b04f4aa1-b678-4d8a-bacf-fcec03c0144a 00:12:05.796 14:16:47 -- 
common/autotest_common.sh@885 -- # local bdev_name=b04f4aa1-b678-4d8a-bacf-fcec03c0144a 00:12:05.796 14:16:47 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:12:05.796 14:16:47 -- common/autotest_common.sh@887 -- # local i 00:12:05.796 14:16:47 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:12:05.796 14:16:47 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:12:05.796 14:16:47 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:06.054 14:16:47 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b04f4aa1-b678-4d8a-bacf-fcec03c0144a -t 2000 00:12:06.312 [ 00:12:06.312 { 00:12:06.312 "name": "b04f4aa1-b678-4d8a-bacf-fcec03c0144a", 00:12:06.312 "aliases": [ 00:12:06.312 "lvs/lvol" 00:12:06.312 ], 00:12:06.312 "product_name": "Logical Volume", 00:12:06.312 "block_size": 4096, 00:12:06.312 "num_blocks": 38912, 00:12:06.312 "uuid": "b04f4aa1-b678-4d8a-bacf-fcec03c0144a", 00:12:06.312 "assigned_rate_limits": { 00:12:06.312 "rw_ios_per_sec": 0, 00:12:06.312 "rw_mbytes_per_sec": 0, 00:12:06.312 "r_mbytes_per_sec": 0, 00:12:06.312 "w_mbytes_per_sec": 0 00:12:06.312 }, 00:12:06.312 "claimed": false, 00:12:06.312 "zoned": false, 00:12:06.312 "supported_io_types": { 00:12:06.312 "read": true, 00:12:06.312 "write": true, 00:12:06.312 "unmap": true, 00:12:06.312 "write_zeroes": true, 00:12:06.312 "flush": false, 00:12:06.312 "reset": true, 00:12:06.312 "compare": false, 00:12:06.312 "compare_and_write": false, 00:12:06.312 "abort": false, 00:12:06.312 "nvme_admin": false, 00:12:06.312 "nvme_io": false 00:12:06.312 }, 00:12:06.312 "driver_specific": { 00:12:06.312 "lvol": { 00:12:06.312 "lvol_store_uuid": "8d8b0ccf-0e71-458d-9e8b-5048add668b2", 00:12:06.312 "base_bdev": "aio_bdev", 00:12:06.312 "thin_provision": false, 00:12:06.312 "snapshot": false, 00:12:06.312 "clone": false, 00:12:06.312 "esnap_clone": false 00:12:06.312 } 00:12:06.312 } 00:12:06.312 } 00:12:06.312 ] 00:12:06.312 14:16:47 -- common/autotest_common.sh@893 -- # return 0 00:12:06.312 14:16:47 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d8b0ccf-0e71-458d-9e8b-5048add668b2 00:12:06.312 14:16:47 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:12:06.570 14:16:48 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:12:06.570 14:16:48 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8d8b0ccf-0e71-458d-9e8b-5048add668b2 00:12:06.570 14:16:48 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:12:06.828 14:16:48 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:12:06.828 14:16:48 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b04f4aa1-b678-4d8a-bacf-fcec03c0144a 00:12:07.087 14:16:48 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8d8b0ccf-0e71-458d-9e8b-5048add668b2 00:12:07.345 14:16:48 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:07.604 14:16:49 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:07.604 00:12:07.604 real 0m19.253s 00:12:07.604 user 
0m48.241s 00:12:07.604 sys 0m4.769s 00:12:07.604 14:16:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:07.604 14:16:49 -- common/autotest_common.sh@10 -- # set +x 00:12:07.604 ************************************ 00:12:07.604 END TEST lvs_grow_dirty 00:12:07.604 ************************************ 00:12:07.604 14:16:49 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:07.604 14:16:49 -- common/autotest_common.sh@794 -- # type=--id 00:12:07.604 14:16:49 -- common/autotest_common.sh@795 -- # id=0 00:12:07.604 14:16:49 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:12:07.604 14:16:49 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:07.604 14:16:49 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:12:07.604 14:16:49 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:12:07.604 14:16:49 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:12:07.604 14:16:49 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:07.604 nvmf_trace.0 00:12:07.604 14:16:49 -- common/autotest_common.sh@809 -- # return 0 00:12:07.604 14:16:49 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:07.604 14:16:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:07.604 14:16:49 -- nvmf/common.sh@117 -- # sync 00:12:07.604 14:16:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:07.604 14:16:49 -- nvmf/common.sh@120 -- # set +e 00:12:07.604 14:16:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:07.604 14:16:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:07.604 rmmod nvme_tcp 00:12:07.604 rmmod nvme_fabrics 00:12:07.604 rmmod nvme_keyring 00:12:07.604 14:16:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:07.604 14:16:49 -- nvmf/common.sh@124 -- # set -e 00:12:07.604 14:16:49 -- nvmf/common.sh@125 -- # return 0 00:12:07.604 14:16:49 -- nvmf/common.sh@478 -- # '[' -n 3129400 ']' 00:12:07.604 14:16:49 -- nvmf/common.sh@479 -- # killprocess 3129400 00:12:07.604 14:16:49 -- common/autotest_common.sh@936 -- # '[' -z 3129400 ']' 00:12:07.604 14:16:49 -- common/autotest_common.sh@940 -- # kill -0 3129400 00:12:07.604 14:16:49 -- common/autotest_common.sh@941 -- # uname 00:12:07.604 14:16:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:07.604 14:16:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3129400 00:12:07.864 14:16:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:07.864 14:16:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:07.864 14:16:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3129400' 00:12:07.864 killing process with pid 3129400 00:12:07.864 14:16:49 -- common/autotest_common.sh@955 -- # kill 3129400 00:12:07.864 14:16:49 -- common/autotest_common.sh@960 -- # wait 3129400 00:12:07.864 14:16:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:07.864 14:16:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:07.864 14:16:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:07.864 14:16:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:07.864 14:16:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:07.864 14:16:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.864 14:16:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.864 14:16:49 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:12:10.405 14:16:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:10.405 00:12:10.405 real 0m41.913s 00:12:10.405 user 1m10.984s 00:12:10.405 sys 0m8.265s 00:12:10.405 14:16:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:10.405 14:16:51 -- common/autotest_common.sh@10 -- # set +x 00:12:10.405 ************************************ 00:12:10.405 END TEST nvmf_lvs_grow 00:12:10.405 ************************************ 00:12:10.405 14:16:51 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:10.405 14:16:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:10.405 14:16:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:10.405 14:16:51 -- common/autotest_common.sh@10 -- # set +x 00:12:10.405 ************************************ 00:12:10.405 START TEST nvmf_bdev_io_wait 00:12:10.405 ************************************ 00:12:10.405 14:16:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:10.405 * Looking for test storage... 00:12:10.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:10.405 14:16:51 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:10.405 14:16:51 -- nvmf/common.sh@7 -- # uname -s 00:12:10.405 14:16:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.405 14:16:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.405 14:16:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.405 14:16:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.405 14:16:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.405 14:16:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.405 14:16:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.405 14:16:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.405 14:16:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.405 14:16:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.405 14:16:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:10.405 14:16:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:12:10.405 14:16:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.405 14:16:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.405 14:16:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:10.405 14:16:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.405 14:16:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:10.405 14:16:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.405 14:16:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.405 14:16:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.405 14:16:51 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.405 14:16:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.405 14:16:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.405 14:16:51 -- paths/export.sh@5 -- # export PATH 00:12:10.405 14:16:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.405 14:16:51 -- nvmf/common.sh@47 -- # : 0 00:12:10.405 14:16:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:10.405 14:16:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:10.405 14:16:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.406 14:16:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.406 14:16:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.406 14:16:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:10.406 14:16:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:10.406 14:16:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:10.406 14:16:51 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:10.406 14:16:51 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:10.406 14:16:51 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:10.406 14:16:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:10.406 14:16:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.406 14:16:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:10.406 14:16:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:10.406 14:16:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:10.406 14:16:51 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.406 14:16:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:10.406 14:16:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.406 14:16:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:10.406 14:16:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:10.406 14:16:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:10.406 14:16:51 -- common/autotest_common.sh@10 -- # set +x 00:12:11.785 14:16:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:11.785 14:16:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:11.785 14:16:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:11.785 14:16:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:11.785 14:16:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:11.785 14:16:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:11.785 14:16:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:11.785 14:16:53 -- nvmf/common.sh@295 -- # net_devs=() 00:12:11.785 14:16:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:11.785 14:16:53 -- nvmf/common.sh@296 -- # e810=() 00:12:11.785 14:16:53 -- nvmf/common.sh@296 -- # local -ga e810 00:12:11.785 14:16:53 -- nvmf/common.sh@297 -- # x722=() 00:12:11.785 14:16:53 -- nvmf/common.sh@297 -- # local -ga x722 00:12:11.785 14:16:53 -- nvmf/common.sh@298 -- # mlx=() 00:12:11.785 14:16:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:11.785 14:16:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:11.785 14:16:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:11.785 14:16:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:11.785 14:16:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:11.785 14:16:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:11.785 14:16:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:11.785 14:16:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:11.785 14:16:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:11.785 14:16:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:11.785 14:16:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:11.785 14:16:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:11.785 14:16:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:11.785 14:16:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:11.785 14:16:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:11.785 14:16:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:11.785 14:16:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:11.785 14:16:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:11.785 14:16:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:11.785 14:16:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:12:11.785 Found 0000:08:00.0 (0x8086 - 0x159b) 00:12:11.785 14:16:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:11.785 14:16:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:11.785 14:16:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.785 14:16:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.785 14:16:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:11.785 14:16:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
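For reference, 0x8086:0x159b sits in the e810 ID list built just above, and the bound driver reported for it is ice. Each matched PCI function is mapped to its kernel netdev through the sysfs glob nvmf/common.sh uses:

    ls /sys/bus/pci/devices/0000:08:00.0/net/    # -> cvl_0_0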
00:12:11.785 14:16:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:12:11.785 Found 0000:08:00.1 (0x8086 - 0x159b) 00:12:11.785 14:16:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:11.785 14:16:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:11.785 14:16:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.785 14:16:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.785 14:16:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:11.785 14:16:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:11.785 14:16:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:11.785 14:16:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:11.785 14:16:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:11.785 14:16:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.785 14:16:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:11.785 14:16:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.785 14:16:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:12:11.785 Found net devices under 0000:08:00.0: cvl_0_0 00:12:11.785 14:16:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.785 14:16:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:11.785 14:16:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.785 14:16:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:11.785 14:16:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.785 14:16:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:12:11.785 Found net devices under 0000:08:00.1: cvl_0_1 00:12:11.785 14:16:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.785 14:16:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:11.785 14:16:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:11.785 14:16:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:11.785 14:16:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:11.785 14:16:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:11.785 14:16:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:11.785 14:16:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:11.785 14:16:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:11.785 14:16:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:11.785 14:16:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:11.785 14:16:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:11.785 14:16:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:11.785 14:16:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:11.785 14:16:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:11.785 14:16:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:11.785 14:16:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:11.785 14:16:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:11.785 14:16:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:11.785 14:16:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:11.785 14:16:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:11.785 14:16:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:11.785 14:16:53 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:11.785 14:16:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:11.785 14:16:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:12.043 14:16:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:12.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:12.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:12:12.043 00:12:12.043 --- 10.0.0.2 ping statistics --- 00:12:12.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.043 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:12:12.043 14:16:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:12.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:12.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:12:12.043 00:12:12.043 --- 10.0.0.1 ping statistics --- 00:12:12.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.043 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:12:12.043 14:16:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:12.043 14:16:53 -- nvmf/common.sh@411 -- # return 0 00:12:12.043 14:16:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:12.043 14:16:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:12.043 14:16:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:12.043 14:16:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:12.043 14:16:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:12.043 14:16:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:12.043 14:16:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:12.043 14:16:53 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:12.043 14:16:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:12.043 14:16:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:12.043 14:16:53 -- common/autotest_common.sh@10 -- # set +x 00:12:12.043 14:16:53 -- nvmf/common.sh@470 -- # nvmfpid=3131347 00:12:12.043 14:16:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:12.043 14:16:53 -- nvmf/common.sh@471 -- # waitforlisten 3131347 00:12:12.043 14:16:53 -- common/autotest_common.sh@817 -- # '[' -z 3131347 ']' 00:12:12.043 14:16:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.043 14:16:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:12.043 14:16:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.043 14:16:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:12.043 14:16:53 -- common/autotest_common.sh@10 -- # set +x 00:12:12.043 [2024-04-26 14:16:53.433086] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
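The namespace plumbing traced above is the entire TCP test topology: nvmf_tcp_init moves one port of the dual-port E810 (cvl_0_0) into the cvl_0_0_ns_spdk namespace to act as the target side, while its peer port (cvl_0_1) stays in the root namespace as the initiator, and the ping pair proves both directions work before any NVMe traffic starts. A minimal standalone sketch of the same setup, using the interface names and addresses from this run (root privileges assumed):

  # target port is isolated in its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator keeps 10.0.0.1 in the root ns; target gets 10.0.0.2 inside the ns
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator-facing interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # root ns -> namespaced target port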
00:12:12.043 [2024-04-26 14:16:53.433174] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.043 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.043 [2024-04-26 14:16:53.497594] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:12.302 [2024-04-26 14:16:53.616629] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.302 [2024-04-26 14:16:53.616689] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.302 [2024-04-26 14:16:53.616705] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.302 [2024-04-26 14:16:53.616718] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.302 [2024-04-26 14:16:53.616730] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:12.302 [2024-04-26 14:16:53.616823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.302 [2024-04-26 14:16:53.616909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.302 [2024-04-26 14:16:53.616968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:12.302 [2024-04-26 14:16:53.616973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.302 14:16:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:12.302 14:16:53 -- common/autotest_common.sh@850 -- # return 0 00:12:12.302 14:16:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:12.302 14:16:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:12.302 14:16:53 -- common/autotest_common.sh@10 -- # set +x 00:12:12.302 14:16:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.302 14:16:53 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:12.302 14:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:12.302 14:16:53 -- common/autotest_common.sh@10 -- # set +x 00:12:12.302 14:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:12.302 14:16:53 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:12.302 14:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:12.302 14:16:53 -- common/autotest_common.sh@10 -- # set +x 00:12:12.302 14:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:12.302 14:16:53 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:12.302 14:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:12.302 14:16:53 -- common/autotest_common.sh@10 -- # set +x 00:12:12.302 [2024-04-26 14:16:53.802077] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.302 14:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:12.302 14:16:53 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:12.302 14:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:12.302 14:16:53 -- common/autotest_common.sh@10 -- # set +x 00:12:12.302 Malloc0 00:12:12.302 14:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:12.302 14:16:53 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:12.302 14:16:53 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:12:12.302 14:16:53 -- common/autotest_common.sh@10 -- # set +x 00:12:12.302 14:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:12.302 14:16:53 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:12.302 14:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:12.302 14:16:53 -- common/autotest_common.sh@10 -- # set +x 00:12:12.302 14:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:12.302 14:16:53 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:12.302 14:16:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:12.302 14:16:53 -- common/autotest_common.sh@10 -- # set +x 00:12:12.302 [2024-04-26 14:16:53.869948] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.560 14:16:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:12.560 14:16:53 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3131400 00:12:12.560 14:16:53 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:12.560 14:16:53 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:12.560 14:16:53 -- target/bdev_io_wait.sh@30 -- # READ_PID=3131402 00:12:12.561 14:16:53 -- nvmf/common.sh@521 -- # config=() 00:12:12.561 14:16:53 -- nvmf/common.sh@521 -- # local subsystem config 00:12:12.561 14:16:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:12.561 14:16:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:12.561 { 00:12:12.561 "params": { 00:12:12.561 "name": "Nvme$subsystem", 00:12:12.561 "trtype": "$TEST_TRANSPORT", 00:12:12.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:12.561 "adrfam": "ipv4", 00:12:12.561 "trsvcid": "$NVMF_PORT", 00:12:12.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:12.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:12.561 "hdgst": ${hdgst:-false}, 00:12:12.561 "ddgst": ${ddgst:-false} 00:12:12.561 }, 00:12:12.561 "method": "bdev_nvme_attach_controller" 00:12:12.561 } 00:12:12.561 EOF 00:12:12.561 )") 00:12:12.561 14:16:53 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:12.561 14:16:53 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:12.561 14:16:53 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3131404 00:12:12.561 14:16:53 -- nvmf/common.sh@521 -- # config=() 00:12:12.561 14:16:53 -- nvmf/common.sh@521 -- # local subsystem config 00:12:12.561 14:16:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:12.561 14:16:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:12.561 { 00:12:12.561 "params": { 00:12:12.561 "name": "Nvme$subsystem", 00:12:12.561 "trtype": "$TEST_TRANSPORT", 00:12:12.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:12.561 "adrfam": "ipv4", 00:12:12.561 "trsvcid": "$NVMF_PORT", 00:12:12.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:12.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:12.561 "hdgst": ${hdgst:-false}, 00:12:12.561 "ddgst": ${ddgst:-false} 00:12:12.561 }, 00:12:12.561 "method": "bdev_nvme_attach_controller" 00:12:12.561 } 00:12:12.561 EOF 00:12:12.561 )") 00:12:12.561 14:16:53 -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:12.561 14:16:53 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3131407 00:12:12.561 14:16:53 -- nvmf/common.sh@543 -- # cat 00:12:12.561 14:16:53 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:12.561 14:16:53 -- target/bdev_io_wait.sh@35 -- # sync 00:12:12.561 14:16:53 -- nvmf/common.sh@521 -- # config=() 00:12:12.561 14:16:53 -- nvmf/common.sh@521 -- # local subsystem config 00:12:12.561 14:16:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:12.561 14:16:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:12.561 { 00:12:12.561 "params": { 00:12:12.561 "name": "Nvme$subsystem", 00:12:12.561 "trtype": "$TEST_TRANSPORT", 00:12:12.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:12.561 "adrfam": "ipv4", 00:12:12.561 "trsvcid": "$NVMF_PORT", 00:12:12.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:12.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:12.561 "hdgst": ${hdgst:-false}, 00:12:12.561 "ddgst": ${ddgst:-false} 00:12:12.561 }, 00:12:12.561 "method": "bdev_nvme_attach_controller" 00:12:12.561 } 00:12:12.561 EOF 00:12:12.561 )") 00:12:12.561 14:16:53 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:12.561 14:16:53 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:12.561 14:16:53 -- nvmf/common.sh@543 -- # cat 00:12:12.561 14:16:53 -- nvmf/common.sh@521 -- # config=() 00:12:12.561 14:16:53 -- nvmf/common.sh@521 -- # local subsystem config 00:12:12.561 14:16:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:12.561 14:16:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:12.561 { 00:12:12.561 "params": { 00:12:12.561 "name": "Nvme$subsystem", 00:12:12.561 "trtype": "$TEST_TRANSPORT", 00:12:12.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:12.561 "adrfam": "ipv4", 00:12:12.561 "trsvcid": "$NVMF_PORT", 00:12:12.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:12.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:12.561 "hdgst": ${hdgst:-false}, 00:12:12.561 "ddgst": ${ddgst:-false} 00:12:12.561 }, 00:12:12.561 "method": "bdev_nvme_attach_controller" 00:12:12.561 } 00:12:12.561 EOF 00:12:12.561 )") 00:12:12.561 14:16:53 -- nvmf/common.sh@543 -- # cat 00:12:12.561 14:16:53 -- target/bdev_io_wait.sh@37 -- # wait 3131400 00:12:12.561 14:16:53 -- nvmf/common.sh@543 -- # cat 00:12:12.561 14:16:53 -- nvmf/common.sh@545 -- # jq . 00:12:12.561 14:16:53 -- nvmf/common.sh@545 -- # jq . 00:12:12.561 14:16:53 -- nvmf/common.sh@545 -- # jq . 
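Each bdevperf instance above receives its controller config as --json /dev/fd/63, i.e. the generated JSON stands in for a file; gen_nvmf_target_json joins its heredoc stanzas with IFS=, and pipes the result through jq, which is what prints the four resolved configs next. A sketch of one launch, assuming bash process substitution (the exact redirection is not visible in this trace) and reusing the write job's flags:

  # hypothetical shape of the write-job launch; gen_nvmf_target_json emits
  # the attach-controller JSON printed below
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w write -t 1 -s 256 &
  WRITE_PID=$!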
00:12:12.561 14:16:53 -- nvmf/common.sh@546 -- # IFS=, 00:12:12.561 14:16:53 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:12.561 "params": { 00:12:12.561 "name": "Nvme1", 00:12:12.561 "trtype": "tcp", 00:12:12.561 "traddr": "10.0.0.2", 00:12:12.561 "adrfam": "ipv4", 00:12:12.561 "trsvcid": "4420", 00:12:12.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:12.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:12.561 "hdgst": false, 00:12:12.561 "ddgst": false 00:12:12.561 }, 00:12:12.561 "method": "bdev_nvme_attach_controller" 00:12:12.561 }' 00:12:12.561 14:16:53 -- nvmf/common.sh@546 -- # IFS=, 00:12:12.561 14:16:53 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:12.561 "params": { 00:12:12.561 "name": "Nvme1", 00:12:12.561 "trtype": "tcp", 00:12:12.561 "traddr": "10.0.0.2", 00:12:12.561 "adrfam": "ipv4", 00:12:12.561 "trsvcid": "4420", 00:12:12.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:12.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:12.561 "hdgst": false, 00:12:12.561 "ddgst": false 00:12:12.561 }, 00:12:12.561 "method": "bdev_nvme_attach_controller" 00:12:12.561 }' 00:12:12.561 14:16:53 -- nvmf/common.sh@545 -- # jq . 00:12:12.561 14:16:53 -- nvmf/common.sh@546 -- # IFS=, 00:12:12.561 14:16:53 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:12.561 "params": { 00:12:12.561 "name": "Nvme1", 00:12:12.561 "trtype": "tcp", 00:12:12.561 "traddr": "10.0.0.2", 00:12:12.561 "adrfam": "ipv4", 00:12:12.561 "trsvcid": "4420", 00:12:12.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:12.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:12.561 "hdgst": false, 00:12:12.561 "ddgst": false 00:12:12.561 }, 00:12:12.561 "method": "bdev_nvme_attach_controller" 00:12:12.561 }' 00:12:12.561 14:16:53 -- nvmf/common.sh@546 -- # IFS=, 00:12:12.561 14:16:53 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:12.561 "params": { 00:12:12.561 "name": "Nvme1", 00:12:12.561 "trtype": "tcp", 00:12:12.561 "traddr": "10.0.0.2", 00:12:12.561 "adrfam": "ipv4", 00:12:12.561 "trsvcid": "4420", 00:12:12.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:12.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:12.561 "hdgst": false, 00:12:12.561 "ddgst": false 00:12:12.561 }, 00:12:12.561 "method": "bdev_nvme_attach_controller" 00:12:12.561 }' 00:12:12.561 [2024-04-26 14:16:53.919699] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:12:12.561 [2024-04-26 14:16:53.919696] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:12:12.561 [2024-04-26 14:16:53.919798] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:12.561 [2024-04-26 14:16:53.919798] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:12.561 [2024-04-26 14:16:53.921034] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:12:12.561 [2024-04-26 14:16:53.921043] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
00:12:12.561 [2024-04-26 14:16:53.921119] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:12.561 [2024-04-26 14:16:53.921120] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:12.561 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.561 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.561 [2024-04-26 14:16:54.064984] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.561 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.819 [2024-04-26 14:16:54.135665] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.819 [2024-04-26 14:16:54.159575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:12:12.819 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.819 [2024-04-26 14:16:54.200456] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.819 [2024-04-26 14:16:54.231253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:12.819 [2024-04-26 14:16:54.265693] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.819 [2024-04-26 14:16:54.294240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:12:12.819 [2024-04-26 14:16:54.359108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:12:13.078 Running I/O for 1 seconds... 00:12:13.078 Running I/O for 1 seconds... 00:12:13.078 Running I/O for 1 seconds... 00:12:13.078 Running I/O for 1 seconds...
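At this point four independent bdevperf processes (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80) are driving the same cnode1 subsystem concurrently for one second each; the script then serializes completion by waiting on each pid in turn, which is the wait 3131400/3131402/3131404/3131407 sequence around the results below:

  # job-control pattern used by bdev_io_wait.sh (pids as captured at launch)
  wait "$WRITE_PID"   # -w write
  wait "$READ_PID"    # -w read
  wait "$FLUSH_PID"   # -w flush
  wait "$UNMAP_PID"   # -w unmap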
00:12:14.014 00:12:14.014 Latency(us) 00:12:14.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.014 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:14.014 Nvme1n1 : 1.01 7358.99 28.75 0.00 0.00 17310.45 8738.13 29903.83 00:12:14.014 =================================================================================================================== 00:12:14.014 Total : 7358.99 28.75 0.00 0.00 17310.45 8738.13 29903.83 00:12:14.014 00:12:14.014 Latency(us) 00:12:14.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.015 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:14.015 Nvme1n1 : 1.02 5310.87 20.75 0.00 0.00 23901.93 11990.66 41943.04 00:12:14.015 =================================================================================================================== 00:12:14.015 Total : 5310.87 20.75 0.00 0.00 23901.93 11990.66 41943.04 00:12:14.273 00:12:14.273 Latency(us) 00:12:14.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.273 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:14.273 Nvme1n1 : 1.00 164504.01 642.59 0.00 0.00 775.15 295.82 946.63 00:12:14.273 =================================================================================================================== 00:12:14.273 Total : 164504.01 642.59 0.00 0.00 775.15 295.82 946.63 00:12:14.273 00:12:14.273 Latency(us) 00:12:14.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.273 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:14.273 Nvme1n1 : 1.01 5499.57 21.48 0.00 0.00 23174.53 7670.14 52817.16 00:12:14.273 =================================================================================================================== 00:12:14.273 Total : 5499.57 21.48 0.00 0.00 23174.53 7670.14 52817.16 00:12:14.273 14:16:55 -- target/bdev_io_wait.sh@38 -- # wait 3131402 00:12:14.273 14:16:55 -- target/bdev_io_wait.sh@39 -- # wait 3131404 00:12:14.532 14:16:55 -- target/bdev_io_wait.sh@40 -- # wait 3131407 00:12:14.532 14:16:55 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.532 14:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:14.532 14:16:55 -- common/autotest_common.sh@10 -- # set +x 00:12:14.532 14:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:14.532 14:16:55 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:14.532 14:16:55 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:14.532 14:16:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:14.532 14:16:55 -- nvmf/common.sh@117 -- # sync 00:12:14.532 14:16:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:14.532 14:16:55 -- nvmf/common.sh@120 -- # set +e 00:12:14.532 14:16:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:14.532 14:16:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:14.532 rmmod nvme_tcp 00:12:14.532 rmmod nvme_fabrics 00:12:14.532 rmmod nvme_keyring 00:12:14.532 14:16:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:14.532 14:16:55 -- nvmf/common.sh@124 -- # set -e 00:12:14.532 14:16:55 -- nvmf/common.sh@125 -- # return 0 00:12:14.532 14:16:55 -- nvmf/common.sh@478 -- # '[' -n 3131347 ']' 00:12:14.532 14:16:55 -- nvmf/common.sh@479 -- # killprocess 3131347 00:12:14.532 14:16:55 -- common/autotest_common.sh@936 -- # '[' -z 3131347 ']' 00:12:14.532 14:16:55 -- 
common/autotest_common.sh@940 -- # kill -0 3131347 00:12:14.532 14:16:55 -- common/autotest_common.sh@941 -- # uname 00:12:14.532 14:16:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:14.532 14:16:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3131347 00:12:14.532 14:16:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:14.532 14:16:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:14.532 14:16:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3131347' 00:12:14.532 killing process with pid 3131347 00:12:14.532 14:16:55 -- common/autotest_common.sh@955 -- # kill 3131347 00:12:14.532 14:16:55 -- common/autotest_common.sh@960 -- # wait 3131347 00:12:14.792 14:16:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:14.792 14:16:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:14.792 14:16:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:14.792 14:16:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:14.792 14:16:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:14.792 14:16:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.792 14:16:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:14.792 14:16:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.702 14:16:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:16.702 00:12:16.702 real 0m6.641s 00:12:16.702 user 0m15.792s 00:12:16.702 sys 0m3.089s 00:12:16.702 14:16:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:16.702 14:16:58 -- common/autotest_common.sh@10 -- # set +x 00:12:16.702 ************************************ 00:12:16.702 END TEST nvmf_bdev_io_wait 00:12:16.702 ************************************ 00:12:16.702 14:16:58 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:16.702 14:16:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:16.702 14:16:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:16.702 14:16:58 -- common/autotest_common.sh@10 -- # set +x 00:12:16.962 ************************************ 00:12:16.962 START TEST nvmf_queue_depth 00:12:16.962 ************************************ 00:12:16.962 14:16:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:16.962 * Looking for test storage... 
00:12:16.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:16.962 14:16:58 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:16.962 14:16:58 -- nvmf/common.sh@7 -- # uname -s 00:12:16.962 14:16:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.962 14:16:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.962 14:16:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.962 14:16:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.962 14:16:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.962 14:16:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.962 14:16:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.962 14:16:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.962 14:16:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.962 14:16:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.962 14:16:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:16.962 14:16:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:12:16.962 14:16:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.962 14:16:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.962 14:16:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:16.962 14:16:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.962 14:16:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:16.962 14:16:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.962 14:16:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.962 14:16:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.962 14:16:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.962 14:16:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.962 14:16:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.962 14:16:58 -- paths/export.sh@5 -- # export PATH 00:12:16.962 14:16:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.962 14:16:58 -- nvmf/common.sh@47 -- # : 0 00:12:16.962 14:16:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:16.962 14:16:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:16.962 14:16:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.962 14:16:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.962 14:16:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.962 14:16:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:16.962 14:16:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:16.962 14:16:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:16.962 14:16:58 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:16.962 14:16:58 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:16.962 14:16:58 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:16.962 14:16:58 -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:16.962 14:16:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:16.962 14:16:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.962 14:16:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:16.962 14:16:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:16.962 14:16:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:16.962 14:16:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.962 14:16:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.962 14:16:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.962 14:16:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:16.962 14:16:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:16.962 14:16:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:16.963 14:16:58 -- common/autotest_common.sh@10 -- # set +x 00:12:18.871 14:17:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:18.871 14:17:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:18.871 14:17:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:18.871 14:17:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:18.871 14:17:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:18.871 14:17:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:18.871 14:17:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:18.871 14:17:00 -- nvmf/common.sh@295 -- # net_devs=() 
00:12:18.871 14:17:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:18.871 14:17:00 -- nvmf/common.sh@296 -- # e810=() 00:12:18.871 14:17:00 -- nvmf/common.sh@296 -- # local -ga e810 00:12:18.871 14:17:00 -- nvmf/common.sh@297 -- # x722=() 00:12:18.871 14:17:00 -- nvmf/common.sh@297 -- # local -ga x722 00:12:18.871 14:17:00 -- nvmf/common.sh@298 -- # mlx=() 00:12:18.871 14:17:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:18.871 14:17:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:18.871 14:17:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:18.871 14:17:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:18.871 14:17:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:18.871 14:17:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:18.871 14:17:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:18.871 14:17:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:18.871 14:17:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:18.871 14:17:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:18.871 14:17:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:18.871 14:17:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:18.871 14:17:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:18.871 14:17:00 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:18.871 14:17:00 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:18.871 14:17:00 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:18.871 14:17:00 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:18.871 14:17:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:18.871 14:17:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:18.871 14:17:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:12:18.871 Found 0000:08:00.0 (0x8086 - 0x159b) 00:12:18.871 14:17:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:18.871 14:17:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:18.871 14:17:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.871 14:17:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.871 14:17:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:18.871 14:17:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:18.871 14:17:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:12:18.871 Found 0000:08:00.1 (0x8086 - 0x159b) 00:12:18.871 14:17:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:18.871 14:17:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:18.871 14:17:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.871 14:17:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.871 14:17:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:18.871 14:17:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:18.871 14:17:00 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:18.871 14:17:00 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:18.871 14:17:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:18.871 14:17:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.871 14:17:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:18.871 14:17:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
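gather_supported_nvmf_pci_devs resolves each whitelisted PCI function to the netdev its driver registered, via sysfs; that lookup is where the cvl_0_* names reported next come from. The core of the loop, in isolation (variable names as in the trace):

  # for each supported PCI address, collect the interface names under
  # /sys/bus/pci/devices/<addr>/net/ and strip the leading path
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")
      net_devs+=("${pci_net_devs[@]}")
  done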
00:12:18.871 14:17:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:12:18.871 Found net devices under 0000:08:00.0: cvl_0_0 00:12:18.871 14:17:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.871 14:17:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:18.871 14:17:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.871 14:17:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:18.871 14:17:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.871 14:17:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:12:18.871 Found net devices under 0000:08:00.1: cvl_0_1 00:12:18.871 14:17:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.871 14:17:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:18.871 14:17:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:18.871 14:17:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:18.871 14:17:00 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:18.871 14:17:00 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:18.871 14:17:00 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.871 14:17:00 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:18.871 14:17:00 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:18.871 14:17:00 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:18.871 14:17:00 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:18.871 14:17:00 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:18.871 14:17:00 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:18.871 14:17:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:18.871 14:17:00 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.871 14:17:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:18.871 14:17:00 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:18.871 14:17:00 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:18.871 14:17:00 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:18.871 14:17:00 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:18.871 14:17:00 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:18.871 14:17:00 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:18.871 14:17:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:18.871 14:17:00 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:18.871 14:17:00 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:18.871 14:17:00 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:18.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:18.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:12:18.871 00:12:18.871 --- 10.0.0.2 ping statistics --- 00:12:18.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.871 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:12:18.871 14:17:00 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:18.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:18.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:12:18.872 00:12:18.872 --- 10.0.0.1 ping statistics --- 00:12:18.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.872 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:12:18.872 14:17:00 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:18.872 14:17:00 -- nvmf/common.sh@411 -- # return 0 00:12:18.872 14:17:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:18.872 14:17:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:18.872 14:17:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:18.872 14:17:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:18.872 14:17:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:18.872 14:17:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:18.872 14:17:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:18.872 14:17:00 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:18.872 14:17:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:18.872 14:17:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:18.872 14:17:00 -- common/autotest_common.sh@10 -- # set +x 00:12:18.872 14:17:00 -- nvmf/common.sh@470 -- # nvmfpid=3133129 00:12:18.872 14:17:00 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:18.872 14:17:00 -- nvmf/common.sh@471 -- # waitforlisten 3133129 00:12:18.872 14:17:00 -- common/autotest_common.sh@817 -- # '[' -z 3133129 ']' 00:12:18.872 14:17:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.872 14:17:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:18.872 14:17:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.872 14:17:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:18.872 14:17:00 -- common/autotest_common.sh@10 -- # set +x 00:12:18.872 [2024-04-26 14:17:00.222772] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:12:18.872 [2024-04-26 14:17:00.222860] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.872 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.872 [2024-04-26 14:17:00.287023] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.872 [2024-04-26 14:17:00.401182] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.872 [2024-04-26 14:17:00.401245] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.872 [2024-04-26 14:17:00.401262] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.872 [2024-04-26 14:17:00.401275] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.872 [2024-04-26 14:17:00.401288] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
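Note that the target itself runs inside the namespace: nvmfappstart prepends $NVMF_TARGET_NS_CMD to $NVMF_APP, which is why nvmf_tgt can bind the namespaced 10.0.0.2. Roughly, with the paths from this run (waitforlisten is the autotest helper that polls the RPC socket until the pid answers):

  # sketch of the effective target launch for this test
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # returns once /var/tmp/spdk.sock accepts RPCs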
00:12:18.872 [2024-04-26 14:17:00.401327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.132 14:17:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:19.132 14:17:00 -- common/autotest_common.sh@850 -- # return 0 00:12:19.132 14:17:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:19.132 14:17:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:19.132 14:17:00 -- common/autotest_common.sh@10 -- # set +x 00:12:19.132 14:17:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.132 14:17:00 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:19.132 14:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:19.132 14:17:00 -- common/autotest_common.sh@10 -- # set +x 00:12:19.132 [2024-04-26 14:17:00.537719] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.132 14:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:19.132 14:17:00 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:19.132 14:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:19.132 14:17:00 -- common/autotest_common.sh@10 -- # set +x 00:12:19.132 Malloc0 00:12:19.132 14:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:19.132 14:17:00 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:19.132 14:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:19.132 14:17:00 -- common/autotest_common.sh@10 -- # set +x 00:12:19.132 14:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:19.132 14:17:00 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:19.132 14:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:19.132 14:17:00 -- common/autotest_common.sh@10 -- # set +x 00:12:19.132 14:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:19.132 14:17:00 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.132 14:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:19.132 14:17:00 -- common/autotest_common.sh@10 -- # set +x 00:12:19.132 [2024-04-26 14:17:00.600739] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.132 14:17:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:19.132 14:17:00 -- target/queue_depth.sh@30 -- # bdevperf_pid=3133148 00:12:19.132 14:17:00 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:19.132 14:17:00 -- target/queue_depth.sh@33 -- # waitforlisten 3133148 /var/tmp/bdevperf.sock 00:12:19.132 14:17:00 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:19.132 14:17:00 -- common/autotest_common.sh@817 -- # '[' -z 3133148 ']' 00:12:19.132 14:17:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:19.132 14:17:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:19.132 14:17:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:12:19.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:19.132 14:17:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:19.132 14:17:00 -- common/autotest_common.sh@10 -- # set +x 00:12:19.132 [2024-04-26 14:17:00.650728] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:12:19.132 [2024-04-26 14:17:00.650830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3133148 ] 00:12:19.132 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.391 [2024-04-26 14:17:00.710492] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.391 [2024-04-26 14:17:00.825845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.391 14:17:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:19.391 14:17:00 -- common/autotest_common.sh@850 -- # return 0 00:12:19.391 14:17:00 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:19.391 14:17:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:19.391 14:17:00 -- common/autotest_common.sh@10 -- # set +x 00:12:19.649 NVMe0n1 00:12:19.649 14:17:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:19.649 14:17:01 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:19.909 Running I/O for 10 seconds... 00:12:29.894 00:12:29.894 Latency(us) 00:12:29.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:29.894 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:29.894 Verification LBA range: start 0x0 length 0x4000 00:12:29.894 NVMe0n1 : 10.09 7635.78 29.83 0.00 0.00 133336.10 19903.53 82721.00 00:12:29.894 =================================================================================================================== 00:12:29.894 Total : 7635.78 29.83 0.00 0.00 133336.10 19903.53 82721.00 00:12:29.894 0 00:12:29.894 14:17:11 -- target/queue_depth.sh@39 -- # killprocess 3133148 00:12:29.894 14:17:11 -- common/autotest_common.sh@936 -- # '[' -z 3133148 ']' 00:12:29.894 14:17:11 -- common/autotest_common.sh@940 -- # kill -0 3133148 00:12:29.894 14:17:11 -- common/autotest_common.sh@941 -- # uname 00:12:29.894 14:17:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:29.894 14:17:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3133148 00:12:29.894 14:17:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:29.894 14:17:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:29.894 14:17:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3133148' 00:12:29.894 killing process with pid 3133148 00:12:29.894 14:17:11 -- common/autotest_common.sh@955 -- # kill 3133148 00:12:29.894 Received shutdown signal, test time was about 10.000000 seconds 00:12:29.894 00:12:29.894 Latency(us) 00:12:29.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:29.894 =================================================================================================================== 00:12:29.894 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:29.894 14:17:11 -- 
common/autotest_common.sh@960 -- # wait 3133148 00:12:30.153 14:17:11 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:30.153 14:17:11 -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:30.153 14:17:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:30.153 14:17:11 -- nvmf/common.sh@117 -- # sync 00:12:30.153 14:17:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:30.153 14:17:11 -- nvmf/common.sh@120 -- # set +e 00:12:30.153 14:17:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:30.153 14:17:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:30.153 rmmod nvme_tcp 00:12:30.153 rmmod nvme_fabrics 00:12:30.153 rmmod nvme_keyring 00:12:30.153 14:17:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:30.153 14:17:11 -- nvmf/common.sh@124 -- # set -e 00:12:30.153 14:17:11 -- nvmf/common.sh@125 -- # return 0 00:12:30.153 14:17:11 -- nvmf/common.sh@478 -- # '[' -n 3133129 ']' 00:12:30.153 14:17:11 -- nvmf/common.sh@479 -- # killprocess 3133129 00:12:30.153 14:17:11 -- common/autotest_common.sh@936 -- # '[' -z 3133129 ']' 00:12:30.154 14:17:11 -- common/autotest_common.sh@940 -- # kill -0 3133129 00:12:30.154 14:17:11 -- common/autotest_common.sh@941 -- # uname 00:12:30.154 14:17:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:30.154 14:17:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3133129 00:12:30.154 14:17:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:30.154 14:17:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:30.154 14:17:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3133129' 00:12:30.154 killing process with pid 3133129 00:12:30.154 14:17:11 -- common/autotest_common.sh@955 -- # kill 3133129 00:12:30.154 14:17:11 -- common/autotest_common.sh@960 -- # wait 3133129 00:12:30.412 14:17:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:30.412 14:17:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:30.412 14:17:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:30.412 14:17:11 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:30.412 14:17:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:30.412 14:17:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.412 14:17:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.412 14:17:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.950 14:17:13 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:32.950 00:12:32.950 real 0m15.630s 00:12:32.950 user 0m22.500s 00:12:32.950 sys 0m2.701s 00:12:32.950 14:17:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:32.950 14:17:13 -- common/autotest_common.sh@10 -- # set +x 00:12:32.950 ************************************ 00:12:32.950 END TEST nvmf_queue_depth 00:12:32.950 ************************************ 00:12:32.950 14:17:14 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:32.950 14:17:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:32.950 14:17:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:32.950 14:17:14 -- common/autotest_common.sh@10 -- # set +x 00:12:32.950 ************************************ 00:12:32.950 START TEST nvmf_multipath 00:12:32.950 ************************************ 00:12:32.950 14:17:14 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:32.950 * Looking for test storage... 00:12:32.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.950 14:17:14 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.950 14:17:14 -- nvmf/common.sh@7 -- # uname -s 00:12:32.950 14:17:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.950 14:17:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.950 14:17:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.950 14:17:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.950 14:17:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.950 14:17:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.950 14:17:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.950 14:17:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.950 14:17:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.950 14:17:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.950 14:17:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:32.950 14:17:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:12:32.950 14:17:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.950 14:17:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.950 14:17:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.950 14:17:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.950 14:17:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:32.950 14:17:14 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.950 14:17:14 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.950 14:17:14 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.950 14:17:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.950 14:17:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.950 14:17:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.950 14:17:14 -- paths/export.sh@5 -- # export PATH 00:12:32.950 14:17:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.950 14:17:14 -- nvmf/common.sh@47 -- # : 0 00:12:32.950 14:17:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:32.950 14:17:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:32.950 14:17:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.950 14:17:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.950 14:17:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.950 14:17:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:32.950 14:17:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:32.950 14:17:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:32.950 14:17:14 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:32.950 14:17:14 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:32.950 14:17:14 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:32.950 14:17:14 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:32.950 14:17:14 -- target/multipath.sh@43 -- # nvmftestinit 00:12:32.950 14:17:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:32.950 14:17:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.950 14:17:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:32.950 14:17:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:32.950 14:17:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:32.950 14:17:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.950 14:17:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.950 14:17:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.950 14:17:14 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:32.950 14:17:14 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:32.950 14:17:14 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:32.950 14:17:14 -- common/autotest_common.sh@10 -- # set +x 00:12:34.325 14:17:15 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:34.325 14:17:15 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:34.325 14:17:15 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:34.325 14:17:15 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:34.325 14:17:15 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:34.325 14:17:15 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:34.325 14:17:15 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:12:34.325 14:17:15 -- nvmf/common.sh@295 -- # net_devs=() 00:12:34.325 14:17:15 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:34.325 14:17:15 -- nvmf/common.sh@296 -- # e810=() 00:12:34.325 14:17:15 -- nvmf/common.sh@296 -- # local -ga e810 00:12:34.325 14:17:15 -- nvmf/common.sh@297 -- # x722=() 00:12:34.325 14:17:15 -- nvmf/common.sh@297 -- # local -ga x722 00:12:34.325 14:17:15 -- nvmf/common.sh@298 -- # mlx=() 00:12:34.326 14:17:15 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:34.326 14:17:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:34.326 14:17:15 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:34.326 14:17:15 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:34.326 14:17:15 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:34.326 14:17:15 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:34.326 14:17:15 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:34.326 14:17:15 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:34.326 14:17:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:34.326 14:17:15 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:34.326 14:17:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:34.326 14:17:15 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:34.326 14:17:15 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:34.326 14:17:15 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:34.326 14:17:15 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:34.326 14:17:15 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:34.326 14:17:15 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:34.326 14:17:15 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:34.326 14:17:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:34.326 14:17:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:12:34.326 Found 0000:08:00.0 (0x8086 - 0x159b) 00:12:34.326 14:17:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:34.326 14:17:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:34.326 14:17:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.326 14:17:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.326 14:17:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:34.326 14:17:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:34.326 14:17:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:12:34.326 Found 0000:08:00.1 (0x8086 - 0x159b) 00:12:34.326 14:17:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:34.326 14:17:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:34.326 14:17:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.326 14:17:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.326 14:17:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:34.326 14:17:15 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:34.326 14:17:15 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:34.326 14:17:15 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:34.326 14:17:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:34.326 14:17:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.326 14:17:15 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:12:34.326 14:17:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.326 14:17:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:12:34.326 Found net devices under 0000:08:00.0: cvl_0_0 00:12:34.326 14:17:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.326 14:17:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:34.326 14:17:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.326 14:17:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:34.326 14:17:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.326 14:17:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:12:34.326 Found net devices under 0000:08:00.1: cvl_0_1 00:12:34.326 14:17:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.326 14:17:15 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:34.326 14:17:15 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:34.326 14:17:15 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:34.326 14:17:15 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:34.326 14:17:15 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:34.326 14:17:15 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.326 14:17:15 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.326 14:17:15 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:34.326 14:17:15 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:34.326 14:17:15 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:34.326 14:17:15 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:34.326 14:17:15 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:34.326 14:17:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:34.326 14:17:15 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.326 14:17:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:34.326 14:17:15 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:34.326 14:17:15 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:34.326 14:17:15 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:34.585 14:17:15 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:34.585 14:17:15 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:34.585 14:17:15 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:34.585 14:17:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:34.585 14:17:15 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:34.585 14:17:15 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:34.585 14:17:15 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:34.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:34.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:12:34.585 00:12:34.585 --- 10.0.0.2 ping statistics --- 00:12:34.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.585 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:12:34.585 14:17:15 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:34.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:34.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:12:34.585 00:12:34.585 --- 10.0.0.1 ping statistics --- 00:12:34.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.585 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:12:34.585 14:17:15 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.585 14:17:15 -- nvmf/common.sh@411 -- # return 0 00:12:34.585 14:17:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:34.585 14:17:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.585 14:17:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:34.585 14:17:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:34.585 14:17:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.585 14:17:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:34.585 14:17:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:34.585 14:17:16 -- target/multipath.sh@45 -- # '[' -z ']' 00:12:34.585 14:17:16 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:34.585 only one NIC for nvmf test 00:12:34.585 14:17:16 -- target/multipath.sh@47 -- # nvmftestfini 00:12:34.585 14:17:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:34.585 14:17:16 -- nvmf/common.sh@117 -- # sync 00:12:34.585 14:17:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:34.585 14:17:16 -- nvmf/common.sh@120 -- # set +e 00:12:34.585 14:17:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:34.585 14:17:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:34.585 rmmod nvme_tcp 00:12:34.585 rmmod nvme_fabrics 00:12:34.585 rmmod nvme_keyring 00:12:34.585 14:17:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:34.585 14:17:16 -- nvmf/common.sh@124 -- # set -e 00:12:34.585 14:17:16 -- nvmf/common.sh@125 -- # return 0 00:12:34.585 14:17:16 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:12:34.585 14:17:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:34.585 14:17:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:34.585 14:17:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:34.585 14:17:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:34.585 14:17:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:34.585 14:17:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.585 14:17:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:34.585 14:17:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.120 14:17:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:37.120 14:17:18 -- target/multipath.sh@48 -- # exit 0 00:12:37.120 14:17:18 -- target/multipath.sh@1 -- # nvmftestfini 00:12:37.120 14:17:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:37.120 14:17:18 -- nvmf/common.sh@117 -- # sync 00:12:37.120 14:17:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:37.120 14:17:18 -- nvmf/common.sh@120 -- # set +e 00:12:37.120 14:17:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:37.120 14:17:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:37.120 14:17:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:37.120 14:17:18 -- nvmf/common.sh@124 -- # set -e 00:12:37.120 14:17:18 -- nvmf/common.sh@125 -- # return 0 00:12:37.120 14:17:18 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:12:37.120 14:17:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:37.120 14:17:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:37.120 14:17:18 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:12:37.120 14:17:18 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:37.120 14:17:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:37.120 14:17:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.120 14:17:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:37.120 14:17:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.120 14:17:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:37.120 00:12:37.120 real 0m3.986s 00:12:37.120 user 0m0.687s 00:12:37.120 sys 0m1.280s 00:12:37.120 14:17:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:37.120 14:17:18 -- common/autotest_common.sh@10 -- # set +x 00:12:37.120 ************************************ 00:12:37.120 END TEST nvmf_multipath 00:12:37.120 ************************************ 00:12:37.120 14:17:18 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:37.120 14:17:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:37.120 14:17:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:37.120 14:17:18 -- common/autotest_common.sh@10 -- # set +x 00:12:37.120 ************************************ 00:12:37.120 START TEST nvmf_zcopy 00:12:37.120 ************************************ 00:12:37.120 14:17:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:37.120 * Looking for test storage... 00:12:37.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.120 14:17:18 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.120 14:17:18 -- nvmf/common.sh@7 -- # uname -s 00:12:37.120 14:17:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.120 14:17:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.120 14:17:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.120 14:17:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.120 14:17:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.120 14:17:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.120 14:17:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.120 14:17:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.120 14:17:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.121 14:17:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.121 14:17:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:37.121 14:17:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:12:37.121 14:17:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.121 14:17:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.121 14:17:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.121 14:17:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.121 14:17:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.121 14:17:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.121 14:17:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.121 14:17:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.121 
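Note on the NVME_HOST machinery being re-sourced above: common.sh derives NVME_HOSTNQN/NVME_HOSTID from nvme gen-hostnqn and packs them into the NVME_HOST array next to NVME_CONNECT='nvme connect'; the CLI-oriented suites splice these into their connect calls. A hedged, illustrative connect using this run's generated values and the cnode1 subsystem created later in this log (flag spellings follow stock nvme-cli; this exact invocation is not traced anywhere in this run):

    # illustrative only -- values copied from the variables logged above
    NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
    NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"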
14:17:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.121 14:17:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.121 14:17:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.121 14:17:18 -- paths/export.sh@5 -- # export PATH 00:12:37.121 14:17:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.121 14:17:18 -- nvmf/common.sh@47 -- # : 0 00:12:37.121 14:17:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:37.121 14:17:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:37.121 14:17:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.121 14:17:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.121 14:17:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.121 14:17:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:37.121 14:17:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:37.121 14:17:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:37.121 14:17:18 -- target/zcopy.sh@12 -- # nvmftestinit 00:12:37.121 14:17:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:37.121 14:17:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.121 14:17:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:37.121 14:17:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:37.121 14:17:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:37.121 14:17:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.121 14:17:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
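The _remove_spdk_ns helper traced in the eval just above runs with its xtrace redirected away, so its body never appears in this log; it is the same cleanup primitive nvmftestfini leaned on at the end of the multipath suite. A minimal teardown sketch under that assumption, using this run's cvl_0_0_ns_spdk namespace and cvl_0_1 initiator port (the netns deletion is inferred, not shown in the trace):

    # hedged replay of the nvmftestfini sequence visible in this log
    sync
    modprobe -v -r nvme-tcp           # the trace shows nvme_fabrics/nvme_keyring going with it
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk   # assumed: what remove_spdk_ns amounts to in this run
    ip -4 addr flush cvl_0_1          # drop the 10.0.0.1/24 initiator address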
00:12:37.121 14:17:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.121 14:17:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:37.121 14:17:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:37.121 14:17:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:37.121 14:17:18 -- common/autotest_common.sh@10 -- # set +x 00:12:38.499 14:17:19 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:38.499 14:17:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:38.499 14:17:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:38.499 14:17:19 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:38.499 14:17:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:38.499 14:17:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:38.499 14:17:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:38.499 14:17:19 -- nvmf/common.sh@295 -- # net_devs=() 00:12:38.499 14:17:19 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:38.499 14:17:19 -- nvmf/common.sh@296 -- # e810=() 00:12:38.499 14:17:19 -- nvmf/common.sh@296 -- # local -ga e810 00:12:38.499 14:17:19 -- nvmf/common.sh@297 -- # x722=() 00:12:38.499 14:17:19 -- nvmf/common.sh@297 -- # local -ga x722 00:12:38.499 14:17:19 -- nvmf/common.sh@298 -- # mlx=() 00:12:38.499 14:17:19 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:38.499 14:17:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.499 14:17:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.499 14:17:19 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.499 14:17:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.499 14:17:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.499 14:17:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.499 14:17:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.499 14:17:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.499 14:17:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.499 14:17:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.499 14:17:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.499 14:17:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:38.500 14:17:19 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:38.500 14:17:19 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:38.500 14:17:19 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:38.500 14:17:19 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:38.500 14:17:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:38.500 14:17:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.500 14:17:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:12:38.500 Found 0000:08:00.0 (0x8086 - 0x159b) 00:12:38.500 14:17:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.500 14:17:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.500 14:17:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.500 14:17:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.500 14:17:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.500 14:17:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.500 14:17:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:12:38.500 Found 0000:08:00.1 (0x8086 - 
0x159b) 00:12:38.500 14:17:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.500 14:17:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.500 14:17:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.500 14:17:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.500 14:17:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.500 14:17:19 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:38.500 14:17:19 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:38.500 14:17:19 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:38.500 14:17:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.500 14:17:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.500 14:17:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:38.500 14:17:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.500 14:17:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:12:38.500 Found net devices under 0000:08:00.0: cvl_0_0 00:12:38.500 14:17:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.500 14:17:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.500 14:17:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.500 14:17:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:38.500 14:17:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.500 14:17:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:12:38.500 Found net devices under 0000:08:00.1: cvl_0_1 00:12:38.500 14:17:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.500 14:17:19 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:38.500 14:17:19 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:38.500 14:17:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:38.500 14:17:19 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:38.500 14:17:19 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:38.500 14:17:19 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.500 14:17:19 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.500 14:17:19 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:38.500 14:17:19 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:38.500 14:17:19 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:38.500 14:17:19 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:38.500 14:17:19 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:38.500 14:17:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:38.500 14:17:19 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.500 14:17:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:38.500 14:17:19 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:38.500 14:17:19 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:38.500 14:17:19 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.500 14:17:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.500 14:17:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.500 14:17:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:38.500 14:17:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.815 14:17:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.815 
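The nvmf_tcp_init sequence just traced turns the two back-to-back E810 ports into a self-contained target/initiator pair on one box: the target port moves into its own network namespace so the SPDK target (10.0.0.2) and the initiator side (10.0.0.1) can talk over a real wire without routing tricks. Condensed into a sketch with the same names and addresses as this run:

    # hedged condensation of the traced nvmf_tcp_init steps
    TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TARGET_IF" && ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"           # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"    # initiator stays in the default ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    # the log continues with an iptables ACCEPT for port 4420 and two sanity pings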
14:17:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:38.815 14:17:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:38.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:38.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:12:38.815 00:12:38.815 --- 10.0.0.2 ping statistics --- 00:12:38.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.815 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:12:38.815 14:17:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:38.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:12:38.815 00:12:38.815 --- 10.0.0.1 ping statistics --- 00:12:38.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.815 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:12:38.815 14:17:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.815 14:17:20 -- nvmf/common.sh@411 -- # return 0 00:12:38.815 14:17:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:38.815 14:17:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.815 14:17:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:38.815 14:17:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:38.815 14:17:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.815 14:17:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:38.815 14:17:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:38.815 14:17:20 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:38.815 14:17:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:38.815 14:17:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:38.815 14:17:20 -- common/autotest_common.sh@10 -- # set +x 00:12:38.815 14:17:20 -- nvmf/common.sh@470 -- # nvmfpid=3137152 00:12:38.815 14:17:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:38.815 14:17:20 -- nvmf/common.sh@471 -- # waitforlisten 3137152 00:12:38.815 14:17:20 -- common/autotest_common.sh@817 -- # '[' -z 3137152 ']' 00:12:38.815 14:17:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.815 14:17:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:38.815 14:17:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.815 14:17:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:38.815 14:17:20 -- common/autotest_common.sh@10 -- # set +x 00:12:38.815 [2024-04-26 14:17:20.169944] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:12:38.815 [2024-04-26 14:17:20.170029] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.815 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.815 [2024-04-26 14:17:20.236996] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.119 [2024-04-26 14:17:20.351962] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
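nvmfappstart, traced above, is a launch-and-poll pattern: start nvmf_tgt inside the target namespace, remember its pid, and block until the app answers on its RPC socket before any configuration RPC is sent. A rough equivalent of that pattern (the real waitforlisten in autotest_common.sh is more careful; the rpc_get_methods probe and the default /var/tmp/spdk.sock path here are assumptions):

    # hedged sketch of the launch-and-wait pattern behind nvmfappstart/waitforlisten
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2> /dev/null || { echo 'target died during startup'; exit 1; }
        sleep 0.5
    done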
00:12:39.119 [2024-04-26 14:17:20.352017] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.119 [2024-04-26 14:17:20.352041] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.119 [2024-04-26 14:17:20.352062] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.119 [2024-04-26 14:17:20.352081] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:39.119 [2024-04-26 14:17:20.352122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.119 14:17:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:39.119 14:17:20 -- common/autotest_common.sh@850 -- # return 0 00:12:39.119 14:17:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:39.119 14:17:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:39.119 14:17:20 -- common/autotest_common.sh@10 -- # set +x 00:12:39.119 14:17:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.119 14:17:20 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:39.119 14:17:20 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:39.119 14:17:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.119 14:17:20 -- common/autotest_common.sh@10 -- # set +x 00:12:39.119 [2024-04-26 14:17:20.492990] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.119 14:17:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.119 14:17:20 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:39.119 14:17:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.119 14:17:20 -- common/autotest_common.sh@10 -- # set +x 00:12:39.119 14:17:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.119 14:17:20 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.119 14:17:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.119 14:17:20 -- common/autotest_common.sh@10 -- # set +x 00:12:39.119 [2024-04-26 14:17:20.509133] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.119 14:17:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.119 14:17:20 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:39.119 14:17:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.119 14:17:20 -- common/autotest_common.sh@10 -- # set +x 00:12:39.119 14:17:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.119 14:17:20 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:39.119 14:17:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.119 14:17:20 -- common/autotest_common.sh@10 -- # set +x 00:12:39.119 malloc0 00:12:39.119 14:17:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.119 14:17:20 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:39.119 14:17:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.119 14:17:20 -- common/autotest_common.sh@10 -- # set +x 00:12:39.119 14:17:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.119 14:17:20 -- target/zcopy.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:39.119 14:17:20 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:39.119 14:17:20 -- nvmf/common.sh@521 -- # config=() 00:12:39.119 14:17:20 -- nvmf/common.sh@521 -- # local subsystem config 00:12:39.119 14:17:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:39.119 14:17:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:39.119 { 00:12:39.119 "params": { 00:12:39.119 "name": "Nvme$subsystem", 00:12:39.119 "trtype": "$TEST_TRANSPORT", 00:12:39.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:39.119 "adrfam": "ipv4", 00:12:39.119 "trsvcid": "$NVMF_PORT", 00:12:39.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:39.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:39.119 "hdgst": ${hdgst:-false}, 00:12:39.119 "ddgst": ${ddgst:-false} 00:12:39.119 }, 00:12:39.119 "method": "bdev_nvme_attach_controller" 00:12:39.119 } 00:12:39.119 EOF 00:12:39.119 )") 00:12:39.119 14:17:20 -- nvmf/common.sh@543 -- # cat 00:12:39.119 14:17:20 -- nvmf/common.sh@545 -- # jq . 00:12:39.119 14:17:20 -- nvmf/common.sh@546 -- # IFS=, 00:12:39.119 14:17:20 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:39.119 "params": { 00:12:39.119 "name": "Nvme1", 00:12:39.119 "trtype": "tcp", 00:12:39.119 "traddr": "10.0.0.2", 00:12:39.119 "adrfam": "ipv4", 00:12:39.119 "trsvcid": "4420", 00:12:39.119 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:39.119 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:39.119 "hdgst": false, 00:12:39.119 "ddgst": false 00:12:39.119 }, 00:12:39.119 "method": "bdev_nvme_attach_controller" 00:12:39.119 }' 00:12:39.119 [2024-04-26 14:17:20.588382] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:12:39.119 [2024-04-26 14:17:20.588469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3137175 ] 00:12:39.119 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.119 [2024-04-26 14:17:20.649071] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.383 [2024-04-26 14:17:20.767434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.654 Running I/O for 10 seconds... 
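Before the verify run's results print below, note that the entire target-side configuration for this suite is the six rpc_cmd calls traced above: a TCP transport with zero-copy enabled, one subsystem capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and a 32 MiB malloc bdev published as namespace 1. Replayed as plain rpc.py invocations (rpc_cmd is a thin wrapper that supplies the socket argument; paths shortened, flags exactly as traced):

    # hedged replay of the traced rpc_cmd sequence
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy      # -c 0: no in-capsule data
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0             # 32 MiB, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1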
00:12:49.644
00:12:49.644 Latency(us)
00:12:49.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:49.644 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:12:49.644 Verification LBA range: start 0x0 length 0x1000
00:12:49.644 Nvme1n1 : 10.02 5248.23 41.00 0.00 0.00 24318.02 3835.07 34564.17
00:12:49.644 ===================================================================================================================
00:12:49.644 Total : 5248.23 41.00 0.00 0.00 24318.02 3835.07 34564.17
00:12:49.903 14:17:31 -- target/zcopy.sh@39 -- # perfpid=3138171
00:12:49.903 14:17:31 -- target/zcopy.sh@41 -- # xtrace_disable
00:12:49.903 14:17:31 -- common/autotest_common.sh@10 -- # set +x
00:12:49.903 14:17:31 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:12:49.903 14:17:31 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:12:49.903 14:17:31 -- nvmf/common.sh@521 -- # config=()
00:12:49.903 14:17:31 -- nvmf/common.sh@521 -- # local subsystem config
00:12:49.903 14:17:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:12:49.903 14:17:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:12:49.903 {
00:12:49.903 "params": {
00:12:49.903 "name": "Nvme$subsystem",
00:12:49.903 "trtype": "$TEST_TRANSPORT",
00:12:49.903 "traddr": "$NVMF_FIRST_TARGET_IP",
00:12:49.903 "adrfam": "ipv4",
00:12:49.903 "trsvcid": "$NVMF_PORT",
00:12:49.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:12:49.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:12:49.903 "hdgst": ${hdgst:-false},
00:12:49.903 "ddgst": ${ddgst:-false}
00:12:49.903 },
00:12:49.903 "method": "bdev_nvme_attach_controller"
00:12:49.903 }
00:12:49.903 EOF
00:12:49.903 )")
00:12:49.903 14:17:31 -- nvmf/common.sh@543 -- # cat
00:12:49.903 [2024-04-26 14:17:31.241187] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:49.903 [2024-04-26 14:17:31.241229] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:49.903 14:17:31 -- nvmf/common.sh@545 -- # jq .
00:12:49.903 14:17:31 -- nvmf/common.sh@546 -- # IFS=, 00:12:49.903 14:17:31 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:49.903 "params": { 00:12:49.903 "name": "Nvme1", 00:12:49.903 "trtype": "tcp", 00:12:49.903 "traddr": "10.0.0.2", 00:12:49.903 "adrfam": "ipv4", 00:12:49.903 "trsvcid": "4420", 00:12:49.903 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:49.903 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:49.903 "hdgst": false, 00:12:49.903 "ddgst": false 00:12:49.903 }, 00:12:49.903 "method": "bdev_nvme_attach_controller" 00:12:49.903 }' 00:12:49.903 [2024-04-26 14:17:31.249149] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.249178] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.257171] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.257198] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.265193] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.265219] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.273212] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.273237] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.281235] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.281260] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.281343] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
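Both bdevperf runs are wired up the same way: gen_nvmf_target_json assembles the bdev_nvme_attach_controller entry printed above and the binary reads it from a /dev/fd path, so no config file ever touches disk. Schematically (the outer subsystems/bdev wrapper below is an assumption; the trace only shows the attach-controller entry being generated and piped through jq):

    # hedged sketch: hand bdevperf a generated config via process substitution
    config='{ "subsystems": [ { "subsystem": "bdev", "config": [ {
      "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1",
                  "hdgst": false, "ddgst": false },
      "method": "bdev_nvme_attach_controller" } ] } ] }'
    ./build/examples/bdevperf --json <(printf '%s\n' "$config" | jq .) \
        -t 5 -q 128 -w randrw -M 50 -o 8192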
00:12:49.903 [2024-04-26 14:17:31.281430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3138171 ] 00:12:49.903 [2024-04-26 14:17:31.289262] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.289297] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.297279] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.297305] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.305301] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.305326] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.903 [2024-04-26 14:17:31.313323] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.313349] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.321344] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.321369] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.329366] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.329391] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.337404] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.337428] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.341514] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.903 [2024-04-26 14:17:31.345468] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.345512] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.353511] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.353560] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.361455] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.361481] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.369497] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.369528] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.377506] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.377533] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.385531] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.385560] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.393549] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.393575] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.401597] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.401640] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.409683] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.409742] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.417664] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.417709] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.425645] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.425670] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.433699] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.433747] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.441696] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.441723] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.449728] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.449764] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.457728] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.457760] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.903 [2024-04-26 14:17:31.459368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.903 [2024-04-26 14:17:31.465755] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.903 [2024-04-26 14:17:31.465781] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.473844] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.161 [2024-04-26 14:17:31.473891] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.481859] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.161 [2024-04-26 14:17:31.481910] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.489890] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.161 [2024-04-26 14:17:31.489940] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.497912] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.161 [2024-04-26 14:17:31.497961] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.505935] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.161 [2024-04-26 14:17:31.505984] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.513950] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.161 [2024-04-26 14:17:31.514000] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.521939] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.161 [2024-04-26 14:17:31.521978] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.529997] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.161 [2024-04-26 14:17:31.530045] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.538023] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.161 [2024-04-26 14:17:31.538071] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.545966] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.161 [2024-04-26 14:17:31.545991] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.553999] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.161 [2024-04-26 14:17:31.554026] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.562019] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.161 [2024-04-26 14:17:31.562046] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.570053] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.161 [2024-04-26 14:17:31.570081] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.578067] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.161 [2024-04-26 14:17:31.578095] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.586091] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.161 [2024-04-26 14:17:31.586118] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.594110] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.161 [2024-04-26 14:17:31.594137] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.602139] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.161 [2024-04-26 14:17:31.602165] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.610158] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.161 [2024-04-26 14:17:31.610183] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.618183] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:12:50.161 [2024-04-26 14:17:31.618208] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.626213] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.161 [2024-04-26 14:17:31.626238] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.634235] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.161 [2024-04-26 14:17:31.634262] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.642264] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.161 [2024-04-26 14:17:31.642291] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.650305] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.161 [2024-04-26 14:17:31.650332] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.658311] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.161 [2024-04-26 14:17:31.658337] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.666338] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.161 [2024-04-26 14:17:31.666367] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 [2024-04-26 14:17:31.674353] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.161 [2024-04-26 14:17:31.674380] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.161 Running I/O for 5 seconds... 
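The wall of paired errors above, which keeps going below, is expected behavior rather than a malfunction: while the 5-second randrw run is in flight, the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which malloc0 already occupies, apparently to exercise the subsystem pause/resume path a namespace change forces on a live target. Each attempt yields exactly one 'Requested NSID 1 already in use' plus one 'Unable to add namespace' line. A hedged sketch of a loop with that effect (the test's actual body is not visible in this excerpt):

    # illustrative: each rejected add_ns forces a pause/resume cycle on cnode1
    while kill -0 "$perfpid" 2> /dev/null; do
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
            &> /dev/null || true
    done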
00:12:50.162 [2024-04-26 14:17:31.686128] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:50.162 [2024-04-26 14:17:31.686157] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line ERROR pair (subsystem.c:1900: Requested NSID 1 already in use / nvmf_rpc.c:1534: Unable to add namespace) repeats every ~12-13 ms from 14:17:31.686 through 14:17:35.480; roughly 300 duplicate occurrences elided ...]
00:12:54.046 [2024-04-26 14:17:35.480607] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.046 [2024-04-26 14:17:35.480644] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:54.046 [2024-04-26 14:17:35.493086] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:54.046 [2024-04-26 14:17:35.493115]
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.046 [2024-04-26 14:17:35.505499] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.046 [2024-04-26 14:17:35.505529] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.046 [2024-04-26 14:17:35.518433] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.046 [2024-04-26 14:17:35.518463] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.046 [2024-04-26 14:17:35.531054] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.046 [2024-04-26 14:17:35.531085] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.046 [2024-04-26 14:17:35.543401] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.046 [2024-04-26 14:17:35.543447] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.046 [2024-04-26 14:17:35.555702] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.046 [2024-04-26 14:17:35.555731] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.046 [2024-04-26 14:17:35.567911] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.046 [2024-04-26 14:17:35.567940] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.046 [2024-04-26 14:17:35.580158] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.046 [2024-04-26 14:17:35.580187] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.046 [2024-04-26 14:17:35.592870] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.046 [2024-04-26 14:17:35.592899] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.046 [2024-04-26 14:17:35.605715] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.046 [2024-04-26 14:17:35.605744] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.304 [2024-04-26 14:17:35.618774] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.304 [2024-04-26 14:17:35.618805] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.304 [2024-04-26 14:17:35.631468] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.304 [2024-04-26 14:17:35.631497] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.304 [2024-04-26 14:17:35.644086] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.304 [2024-04-26 14:17:35.644115] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.304 [2024-04-26 14:17:35.656700] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.304 [2024-04-26 14:17:35.656729] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.304 [2024-04-26 14:17:35.669261] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.304 [2024-04-26 14:17:35.669290] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.304 [2024-04-26 14:17:35.681743] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.304 [2024-04-26 14:17:35.681772] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.304 [2024-04-26 14:17:35.694383] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.304 [2024-04-26 14:17:35.694412] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.304 [2024-04-26 14:17:35.707192] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.304 [2024-04-26 14:17:35.707221] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.304 [2024-04-26 14:17:35.719546] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.304 [2024-04-26 14:17:35.719576] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.304 [2024-04-26 14:17:35.731888] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.304 [2024-04-26 14:17:35.731917] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.304 [2024-04-26 14:17:35.744642] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.304 [2024-04-26 14:17:35.744671] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.304 [2024-04-26 14:17:35.757106] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.304 [2024-04-26 14:17:35.757135] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.304 [2024-04-26 14:17:35.769722] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.304 [2024-04-26 14:17:35.769752] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.304 [2024-04-26 14:17:35.782011] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.304 [2024-04-26 14:17:35.782054] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.304 [2024-04-26 14:17:35.795010] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.304 [2024-04-26 14:17:35.795039] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.304 [2024-04-26 14:17:35.807688] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.304 [2024-04-26 14:17:35.807718] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.304 [2024-04-26 14:17:35.820332] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.304 [2024-04-26 14:17:35.820361] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.304 [2024-04-26 14:17:35.833168] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.304 [2024-04-26 14:17:35.833197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.304 [2024-04-26 14:17:35.846148] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.304 [2024-04-26 14:17:35.846177] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.304 [2024-04-26 14:17:35.858832] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.304 [2024-04-26 14:17:35.858861] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.304 [2024-04-26 14:17:35.871580] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.304 [2024-04-26 14:17:35.871625] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.563 [2024-04-26 14:17:35.884970] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.563 [2024-04-26 14:17:35.885000] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.563 [2024-04-26 14:17:35.897653] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.563 [2024-04-26 14:17:35.897682] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.563 [2024-04-26 14:17:35.910430] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.563 [2024-04-26 14:17:35.910458] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.563 [2024-04-26 14:17:35.923153] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.563 [2024-04-26 14:17:35.923182] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.563 [2024-04-26 14:17:35.935755] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.563 [2024-04-26 14:17:35.935784] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.563 [2024-04-26 14:17:35.948877] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.563 [2024-04-26 14:17:35.948907] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.563 [2024-04-26 14:17:35.961506] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.563 [2024-04-26 14:17:35.961535] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.563 [2024-04-26 14:17:35.974281] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.563 [2024-04-26 14:17:35.974310] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.563 [2024-04-26 14:17:35.986993] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.563 [2024-04-26 14:17:35.987023] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.563 [2024-04-26 14:17:36.000188] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.563 [2024-04-26 14:17:36.000218] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.563 [2024-04-26 14:17:36.012285] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.563 [2024-04-26 14:17:36.012315] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.563 [2024-04-26 14:17:36.024983] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.563 [2024-04-26 14:17:36.025021] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.564 [2024-04-26 14:17:36.037946] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.564 [2024-04-26 14:17:36.037974] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.564 [2024-04-26 14:17:36.050414] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.564 [2024-04-26 14:17:36.050443] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.564 [2024-04-26 14:17:36.062505] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.564 [2024-04-26 14:17:36.062534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.564 [2024-04-26 14:17:36.074688] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.564 [2024-04-26 14:17:36.074718] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.564 [2024-04-26 14:17:36.086659] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.564 [2024-04-26 14:17:36.086689] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.564 [2024-04-26 14:17:36.099035] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.564 [2024-04-26 14:17:36.099065] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.564 [2024-04-26 14:17:36.111512] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.564 [2024-04-26 14:17:36.111541] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.564 [2024-04-26 14:17:36.123981] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.564 [2024-04-26 14:17:36.124010] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.822 [2024-04-26 14:17:36.137136] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.822 [2024-04-26 14:17:36.137167] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.822 [2024-04-26 14:17:36.149498] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.822 [2024-04-26 14:17:36.149528] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.822 [2024-04-26 14:17:36.161833] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.822 [2024-04-26 14:17:36.161864] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.822 [2024-04-26 14:17:36.174315] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.822 [2024-04-26 14:17:36.174344] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.822 [2024-04-26 14:17:36.186783] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.822 [2024-04-26 14:17:36.186813] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.822 [2024-04-26 14:17:36.199082] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.822 [2024-04-26 14:17:36.199111] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.822 [2024-04-26 14:17:36.211310] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.822 [2024-04-26 14:17:36.211339] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.822 [2024-04-26 14:17:36.223795] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.822 [2024-04-26 14:17:36.223824] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.822 [2024-04-26 14:17:36.236220] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.822 [2024-04-26 14:17:36.236249] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.822 [2024-04-26 14:17:36.248467] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.822 [2024-04-26 14:17:36.248496] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.822 [2024-04-26 14:17:36.261083] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.822 [2024-04-26 14:17:36.261132] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.822 [2024-04-26 14:17:36.273629] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.822 [2024-04-26 14:17:36.273667] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.822 [2024-04-26 14:17:36.286286] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.822 [2024-04-26 14:17:36.286315] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.822 [2024-04-26 14:17:36.298660] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.822 [2024-04-26 14:17:36.298696] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.822 [2024-04-26 14:17:36.311302] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.822 [2024-04-26 14:17:36.311332] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.822 [2024-04-26 14:17:36.324251] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.822 [2024-04-26 14:17:36.324280] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.822 [2024-04-26 14:17:36.337478] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.822 [2024-04-26 14:17:36.337509] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.823 [2024-04-26 14:17:36.350097] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.823 [2024-04-26 14:17:36.350126] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.823 [2024-04-26 14:17:36.362812] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.823 [2024-04-26 14:17:36.362842] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.823 [2024-04-26 14:17:36.375173] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.823 [2024-04-26 14:17:36.375202] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:54.823 [2024-04-26 14:17:36.387928] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:54.823 [2024-04-26 14:17:36.387967] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.081 [2024-04-26 14:17:36.400895] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.081 [2024-04-26 14:17:36.400925] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.081 [2024-04-26 14:17:36.413691] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.081 [2024-04-26 14:17:36.413721] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.081 [2024-04-26 14:17:36.426663] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.081 [2024-04-26 14:17:36.426692] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.081 [2024-04-26 14:17:36.439213] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.081 [2024-04-26 14:17:36.439243] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.081 [2024-04-26 14:17:36.451851] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.081 [2024-04-26 14:17:36.451881] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.081 [2024-04-26 14:17:36.464715] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.081 [2024-04-26 14:17:36.464753] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.081 [2024-04-26 14:17:36.477390] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.081 [2024-04-26 14:17:36.477419] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.081 [2024-04-26 14:17:36.490170] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.081 [2024-04-26 14:17:36.490199] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.081 [2024-04-26 14:17:36.502912] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.081 [2024-04-26 14:17:36.502941] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.081 [2024-04-26 14:17:36.515686] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.081 [2024-04-26 14:17:36.515715] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.081 [2024-04-26 14:17:36.528323] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.081 [2024-04-26 14:17:36.528352] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.082 [2024-04-26 14:17:36.540749] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.082 [2024-04-26 14:17:36.540778] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.082 [2024-04-26 14:17:36.552893] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.082 [2024-04-26 14:17:36.552921] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.082 [2024-04-26 14:17:36.565318] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.082 [2024-04-26 14:17:36.565347] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.082 [2024-04-26 14:17:36.577771] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.082 [2024-04-26 14:17:36.577800] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:55.082 [2024-04-26 14:17:36.590642] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:55.082 [2024-04-26 14:17:36.590672] 
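This flood is the zcopy test repeatedly issuing nvmf_subsystem_add_ns for an NSID that is already attached while the subsystem is paused for the update (hence the nvmf_rpc_ns_paused callback in each pair). Outside the harness, the same collision can be reproduced against any running target with the stock scripts/rpc.py; the subsystem and bdev names below are placeholders, not the ones this job used:

  # a minimal sketch: the second add of the same NSID is rejected, exactly as logged above
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # -> Requested NSID 1 already in use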
[... the pair keeps repeating at the same cadence from 14:17:36.603273 (00:12:55.082) through 14:17:36.694881 (00:12:55.341), at which point the 5-second I/O job completes ...]
00:12:55.341
00:12:55.341 Latency(us)
00:12:55.341 Device Information                                                          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:12:55.341 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:55.341 Nvme1n1                                                                     :       5.01   10133.62      79.17      0.00     0.00   12612.84    5582.70   20874.43
00:12:55.341 ===================================================================================================================
00:12:55.341 Total                                                                       :              10133.62      79.17      0.00     0.00   12612.84    5582.70   20874.43
[... the error pair resumes at roughly 8 ms intervals from 14:17:36.702945 (00:12:55.341) through 14:17:36.919559 (00:12:55.600) while the add-namespace loop drains ...]
00:12:55.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3138171) - No such process
00:12:55.600 14:17:36 -- target/zcopy.sh@49 -- # wait 3138171
00:12:55.600 14:17:36 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:55.600 14:17:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:55.600 14:17:36 -- common/autotest_common.sh@10 -- # set +x
00:12:55.600 14:17:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:55.600 14:17:36 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:12:55.600 14:17:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:55.600 14:17:36 -- common/autotest_common.sh@10 -- # set +x
00:12:55.600 delay0
00:12:55.600 14:17:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:55.600 14:17:36 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:12:55.600 14:17:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:12:55.600 14:17:36 -- common/autotest_common.sh@10 -- # set +x
00:12:55.600 14:17:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:12:55.600 14:17:36 -- target/zcopy.sh@56 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:12:55.600 EAL: No free 2048 kB hugepages reported on node 1
00:12:55.600 [2024-04-26 14:17:37.079773] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:13:03.716 Initializing NVMe Controllers
00:13:03.716 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:03.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:03.716 Initialization complete. Launching workers.
00:13:03.716 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 217, failed: 22363
00:13:03.716 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 22424, failed to submit 156
00:13:03.716 success 22371, unsuccess 53, failed 0
00:13:03.717 14:17:44 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:13:03.717 14:17:44 -- target/zcopy.sh@60 -- # nvmftestfini
00:13:03.717 14:17:44 -- nvmf/common.sh@477 -- # nvmfcleanup
00:13:03.717 14:17:44 -- nvmf/common.sh@117 -- # sync
00:13:03.717 14:17:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:03.717 14:17:44 -- nvmf/common.sh@120 -- # set +e
00:13:03.717 14:17:44 -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:03.717 14:17:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:13:03.717 rmmod nvme_tcp
00:13:03.717 rmmod nvme_fabrics
00:13:03.717 rmmod nvme_keyring
00:13:03.717 14:17:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:03.717 14:17:44 -- nvmf/common.sh@124 -- # set -e
00:13:03.717 14:17:44 -- nvmf/common.sh@125 -- # return 0
00:13:03.717 14:17:44 -- nvmf/common.sh@478 -- # '[' -n 3137152 ']'
00:13:03.717 14:17:44 -- nvmf/common.sh@479 -- # killprocess 3137152
00:13:03.717 14:17:44 -- common/autotest_common.sh@936 -- # '[' -z 3137152 ']'
00:13:03.717 14:17:44 -- common/autotest_common.sh@940 -- # kill -0 3137152
00:13:03.717 14:17:44 -- common/autotest_common.sh@941 -- # uname
00:13:03.717 14:17:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:03.717 14:17:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3137152
00:13:03.717 14:17:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:13:03.717 14:17:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:13:03.717 14:17:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3137152'
00:13:03.717 killing process with pid 3137152
00:13:03.717 14:17:44 -- common/autotest_common.sh@955 -- # kill 3137152
00:13:03.717 14:17:44 -- common/autotest_common.sh@960 -- # wait 3137152
00:13:03.717 14:17:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:13:03.717 14:17:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:13:03.717 14:17:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:13:03.717 14:17:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:03.717 14:17:44 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:13:03.717 14:17:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:03.717 14:17:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:03.717 14:17:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:05.099 14:17:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:13:05.099
00:13:05.099 real 0m28.305s
00:13:05.099 user 0m40.763s
00:13:05.099 sys 0m8.643s
00:13:05.099 14:17:46 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:13:05.099 14:17:46 -- common/autotest_common.sh@10 -- # set +x
00:13:05.099 ************************************
00:13:05.099 END TEST nvmf_zcopy
00:13:05.099 ************************************
00:13:05.099 14:17:46 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:13:05.099 14:17:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:13:05.099 14:17:46 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:05.099 14:17:46 -- common/autotest_common.sh@10 -- # set +x
00:13:05.359 ************************************
00:13:05.359 START TEST nvmf_nmic
00:13:05.359 ************************************
00:13:05.359 14:17:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:13:05.359 * Looking for test storage...
00:13:05.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:05.359 14:17:46 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:05.359 14:17:46 -- nvmf/common.sh@7 -- # uname -s
00:13:05.359 14:17:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:05.359 14:17:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:05.359 14:17:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:05.359 14:17:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:05.359 14:17:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:05.359 14:17:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:05.359 14:17:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:05.359 14:17:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:05.359 14:17:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:05.359 14:17:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:05.359 14:17:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:13:05.359 14:17:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc
00:13:05.359 14:17:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:05.359 14:17:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:05.359 14:17:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:13:05.359 14:17:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:13:05.359 14:17:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:13:05.359 14:17:46 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:05.359 14:17:46 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:05.359 14:17:46 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:05.359 14:17:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... repeated golangci/protoc/go toolchain prefixes elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:05.359 14:17:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same toolchain prefixes and system directories elided ...]:/var/lib/snapd/snap/bin
00:13:05.359 14:17:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same toolchain prefixes and system directories elided ...]:/var/lib/snapd/snap/bin
00:13:05.359 14:17:46 -- paths/export.sh@5 -- # export PATH
00:13:05.359 14:17:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... same toolchain prefixes and system directories elided ...]:/var/lib/snapd/snap/bin
00:13:05.359 14:17:46 -- nvmf/common.sh@47 -- # : 0
00:13:05.359 14:17:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:13:05.359 14:17:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:13:05.359 14:17:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:13:05.359 14:17:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:05.359 14:17:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:05.359 14:17:46 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:13:05.359 14:17:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:13:05.359 14:17:46 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:13:05.359 14:17:46 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:13:05.359 14:17:46 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:13:05.359 14:17:46 -- target/nmic.sh@14 -- # nvmftestinit
00:13:05.359 14:17:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:13:05.359 14:17:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:13:05.359 14:17:46 -- nvmf/common.sh@437 -- # prepare_net_devs
00:13:05.359 14:17:46 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:13:05.359 14:17:46 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:13:05.359 14:17:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:05.359 14:17:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:05.359 14:17:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:05.359 14:17:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:13:05.359 14:17:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:13:05.359 14:17:46 -- nvmf/common.sh@285 -- # xtrace_disable
00:13:05.359 14:17:46 -- common/autotest_common.sh@10 -- # set +x
00:13:07.274 14:17:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
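gather_supported_nvmf_pci_devs, whose trace follows, buckets the host's NICs by PCI vendor/device ID (the E810 ports in this job match 0x8086:0x159b) and then resolves each hit to a kernel netdev. A rough standalone sketch of the same check, reading sysfs directly instead of SPDK's cached lspci output (this helper is hypothetical, not part of the SPDK scripts):

  # list E810 (8086:159b) ports and their kernel interface names, e.g. cvl_0_0
  for pci in /sys/bus/pci/devices/*; do
      if [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]]; then
          echo "Found ${pci##*/}: $(ls "$pci/net" 2>/dev/null)"
      fi
  done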
00:13:07.274 14:17:48 -- nvmf/common.sh@291 -- # pci_devs=()
00:13:07.274 14:17:48 -- nvmf/common.sh@291 -- # local -a pci_devs
00:13:07.274 14:17:48 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:13:07.274 14:17:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:13:07.274 14:17:48 -- nvmf/common.sh@293 -- # pci_drivers=()
00:13:07.274 14:17:48 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:13:07.274 14:17:48 -- nvmf/common.sh@295 -- # net_devs=()
00:13:07.274 14:17:48 -- nvmf/common.sh@295 -- # local -ga net_devs
00:13:07.274 14:17:48 -- nvmf/common.sh@296 -- # e810=()
00:13:07.274 14:17:48 -- nvmf/common.sh@296 -- # local -ga e810
00:13:07.274 14:17:48 -- nvmf/common.sh@297 -- # x722=()
00:13:07.274 14:17:48 -- nvmf/common.sh@297 -- # local -ga x722
00:13:07.274 14:17:48 -- nvmf/common.sh@298 -- # mlx=()
00:13:07.274 14:17:48 -- nvmf/common.sh@298 -- # local -ga mlx
00:13:07.274 14:17:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:13:07.274 14:17:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:13:07.274 14:17:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:13:07.274 14:17:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:13:07.274 14:17:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:13:07.274 14:17:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:13:07.274 14:17:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:13:07.274 14:17:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:13:07.274 14:17:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:13:07.274 14:17:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:13:07.274 14:17:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:13:07.274 14:17:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:13:07.274 14:17:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:13:07.274 14:17:48 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:13:07.274 14:17:48 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:13:07.274 14:17:48 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:13:07.274 14:17:48 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:13:07.274 14:17:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:13:07.274 14:17:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)'
00:13:07.274 Found 0000:08:00.0 (0x8086 - 0x159b)
00:13:07.274 14:17:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:13:07.274 14:17:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:13:07.274 14:17:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:07.274 14:17:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:07.274 14:17:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:13:07.274 14:17:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:13:07.274 14:17:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)'
00:13:07.274 Found 0000:08:00.1 (0x8086 - 0x159b)
00:13:07.274 14:17:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:13:07.274 14:17:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:13:07.274 14:17:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:07.274 14:17:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:07.274 14:17:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:13:07.274 14:17:48 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
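With both E810 ports identified, nvmf_tcp_init splits them across a network namespace so the initiator and target sides talk over the physical link rather than loopback. Collected from the xtrace below into one sequence (interface and namespace names exactly as in this job):

  ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the first port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic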
00:13:07.275 14:17:48 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:13:07.275 14:17:48 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:13:07.275 14:17:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:13:07.275 14:17:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:07.275 14:17:48 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:13:07.275 14:17:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:07.275 14:17:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0'
00:13:07.275 Found net devices under 0000:08:00.0: cvl_0_0
00:13:07.275 14:17:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:13:07.275 14:17:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:13:07.275 14:17:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:07.275 14:17:48 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:13:07.275 14:17:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:07.275 14:17:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1'
00:13:07.275 Found net devices under 0000:08:00.1: cvl_0_1
00:13:07.275 14:17:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:13:07.275 14:17:48 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:13:07.275 14:17:48 -- nvmf/common.sh@403 -- # is_hw=yes
00:13:07.275 14:17:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:13:07.275 14:17:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:13:07.275 14:17:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:13:07.275 14:17:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:07.275 14:17:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:07.275 14:17:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:07.275 14:17:48 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:13:07.275 14:17:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:07.275 14:17:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:07.275 14:17:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:13:07.275 14:17:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:07.275 14:17:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:07.275 14:17:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:13:07.275 14:17:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:13:07.275 14:17:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:13:07.275 14:17:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:07.275 14:17:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:07.275 14:17:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:07.275 14:17:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:13:07.275 14:17:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:07.275 14:17:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:07.275 14:17:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:07.275 14:17:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:13:07.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:07.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms
00:13:07.275
00:13:07.275 --- 10.0.0.2 ping statistics ---
00:13:07.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:07.275 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms
00:13:07.275 14:17:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:07.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:07.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms
00:13:07.275
00:13:07.275 --- 10.0.0.1 ping statistics ---
00:13:07.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:07.275 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms
00:13:07.275 14:17:48 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:07.275 14:17:48 -- nvmf/common.sh@411 -- # return 0
00:13:07.275 14:17:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:13:07.275 14:17:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:07.275 14:17:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:13:07.275 14:17:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:13:07.275 14:17:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:07.275 14:17:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:13:07.275 14:17:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:13:07.275 14:17:48 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:13:07.275 14:17:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:13:07.275 14:17:48 -- common/autotest_common.sh@710 -- # xtrace_disable
00:13:07.275 14:17:48 -- common/autotest_common.sh@10 -- # set +x
00:13:07.275 14:17:48 -- nvmf/common.sh@470 -- # nvmfpid=3140783
00:13:07.275 14:17:48 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:07.275 14:17:48 -- nvmf/common.sh@471 -- # waitforlisten 3140783
00:13:07.275 14:17:48 -- common/autotest_common.sh@817 -- # '[' -z 3140783 ']'
00:13:07.275 14:17:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:07.275 14:17:48 -- common/autotest_common.sh@822 -- # local max_retries=100
00:13:07.275 14:17:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:07.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:07.275 14:17:48 -- common/autotest_common.sh@826 -- # xtrace_disable
00:13:07.275 14:17:48 -- common/autotest_common.sh@10 -- # set +x
00:13:07.275 [2024-04-26 14:17:48.544270] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
00:13:07.275 [2024-04-26 14:17:48.544362] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:07.275 EAL: No free 2048 kB hugepages reported on node 1
00:13:07.275 [2024-04-26 14:17:48.609312] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:07.275 [2024-04-26 14:17:48.726175] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:07.275 [2024-04-26 14:17:48.726233] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:07.275 [2024-04-26 14:17:48.726248] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:07.275 [2024-04-26 14:17:48.726261] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:07.275 [2024-04-26 14:17:48.726273] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:07.275 [2024-04-26 14:17:48.726340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:07.275 [2024-04-26 14:17:48.726391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:13:07.275 [2024-04-26 14:17:48.726442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:13:07.275 [2024-04-26 14:17:48.726445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:07.275 14:17:48 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:13:07.275 14:17:48 -- common/autotest_common.sh@850 -- # return 0
00:13:07.275 14:17:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:13:07.275 14:17:48 -- common/autotest_common.sh@716 -- # xtrace_disable
00:13:07.275 14:17:48 -- common/autotest_common.sh@10 -- # set +x
00:13:07.533 14:17:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:07.533 14:17:48 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:13:07.533 14:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:07.533 14:17:48 -- common/autotest_common.sh@10 -- # set +x
00:13:07.533 [2024-04-26 14:17:48.862152] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:07.533 14:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:07.533 14:17:48 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:13:07.533 14:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:07.533 14:17:48 -- common/autotest_common.sh@10 -- # set +x
00:13:07.533 Malloc0
00:13:07.533 14:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:07.533 14:17:48 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:13:07.533 14:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:07.533 14:17:48 -- common/autotest_common.sh@10 -- # set +x
00:13:07.533 14:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:07.533 14:17:48 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:13:07.533 14:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:07.533 14:17:48 -- common/autotest_common.sh@10 -- # set +x
00:13:07.533 14:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:07.533 14:17:48 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:07.533 14:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable
00:13:07.533 14:17:48 -- common/autotest_common.sh@10 -- # set +x
00:13:07.533 [2024-04-26 14:17:48.912558] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:07.533 14:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:13:07.533 14:17:48 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:13:07.533 test case1: single bdev can't be used in multiple subsystems
00:13:07.533 14:17:48 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:13:07.533 14:17:48 --
common/autotest_common.sh@549 -- # xtrace_disable 00:13:07.533 14:17:48 -- common/autotest_common.sh@10 -- # set +x 00:13:07.533 14:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:07.533 14:17:48 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:07.533 14:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:07.533 14:17:48 -- common/autotest_common.sh@10 -- # set +x 00:13:07.533 14:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:07.533 14:17:48 -- target/nmic.sh@28 -- # nmic_status=0 00:13:07.533 14:17:48 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:07.533 14:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:07.533 14:17:48 -- common/autotest_common.sh@10 -- # set +x 00:13:07.533 [2024-04-26 14:17:48.936436] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:07.534 [2024-04-26 14:17:48.936468] subsystem.c:1934:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:07.534 [2024-04-26 14:17:48.936484] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.534 request: 00:13:07.534 { 00:13:07.534 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:07.534 "namespace": { 00:13:07.534 "bdev_name": "Malloc0", 00:13:07.534 "no_auto_visible": false 00:13:07.534 }, 00:13:07.534 "method": "nvmf_subsystem_add_ns", 00:13:07.534 "req_id": 1 00:13:07.534 } 00:13:07.534 Got JSON-RPC error response 00:13:07.534 response: 00:13:07.534 { 00:13:07.534 "code": -32602, 00:13:07.534 "message": "Invalid parameters" 00:13:07.534 } 00:13:07.534 14:17:48 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:13:07.534 14:17:48 -- target/nmic.sh@29 -- # nmic_status=1 00:13:07.534 14:17:48 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:07.534 14:17:48 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:07.534 Adding namespace failed - expected result. 
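The JSON-RPC error above is the expected outcome of test case 1: the first nvmf_subsystem_add_ns takes an exclusive_write claim on Malloc0 for cnode1, so when cnode2 tries to attach the same bdev, bdev_open fails with error=-1 and the RPC is rejected with code -32602. A minimal sketch of the same sequence driven directly through rpc.py, assuming a running nvmf_tgt listening on the default /var/tmp/spdk.sock socket and a script path relative to an SPDK checkout:

rpc=scripts/rpc.py   # path is an assumption; use your checkout's scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # claims Malloc0 (exclusive_write)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail: bdev already claimed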
00:13:07.534 14:17:48 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:07.534 test case2: host connect to nvmf target in multiple paths 00:13:07.534 14:17:48 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:07.534 14:17:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:07.534 14:17:48 -- common/autotest_common.sh@10 -- # set +x 00:13:07.534 [2024-04-26 14:17:48.944544] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:07.534 14:17:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:07.534 14:17:48 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.099 14:17:49 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:08.358 14:17:49 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:08.358 14:17:49 -- common/autotest_common.sh@1184 -- # local i=0 00:13:08.358 14:17:49 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.358 14:17:49 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:08.358 14:17:49 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:10.884 14:17:51 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:10.884 14:17:51 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:10.884 14:17:51 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.884 14:17:51 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:10.884 14:17:51 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.884 14:17:51 -- common/autotest_common.sh@1194 -- # return 0 00:13:10.884 14:17:51 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:10.884 [global] 00:13:10.884 thread=1 00:13:10.884 invalidate=1 00:13:10.884 rw=write 00:13:10.884 time_based=1 00:13:10.884 runtime=1 00:13:10.884 ioengine=libaio 00:13:10.884 direct=1 00:13:10.884 bs=4096 00:13:10.884 iodepth=1 00:13:10.884 norandommap=0 00:13:10.884 numjobs=1 00:13:10.884 00:13:10.884 verify_dump=1 00:13:10.884 verify_backlog=512 00:13:10.884 verify_state_save=0 00:13:10.884 do_verify=1 00:13:10.884 verify=crc32c-intel 00:13:10.884 [job0] 00:13:10.884 filename=/dev/nvme0n1 00:13:10.884 Could not set queue depth (nvme0n1) 00:13:10.884 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:10.884 fio-3.35 00:13:10.884 Starting 1 thread 00:13:11.817 00:13:11.817 job0: (groupid=0, jobs=1): err= 0: pid=3141266: Fri Apr 26 14:17:53 2024 00:13:11.817 read: IOPS=21, BW=85.3KiB/s (87.3kB/s)(88.0KiB/1032msec) 00:13:11.817 slat (nsec): min=9201, max=31590, avg=20099.05, stdev=7346.19 00:13:11.817 clat (usec): min=40890, max=42038, avg=41340.02, stdev=503.29 00:13:11.817 lat (usec): min=40921, max=42056, avg=41360.12, stdev=505.01 00:13:11.817 clat percentiles (usec): 00:13:11.817 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:11.817 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:11.817 | 
70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:13:11.817 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:11.817 | 99.99th=[42206] 00:13:11.817 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:13:11.817 slat (nsec): min=7601, max=45873, avg=19319.15, stdev=5138.59 00:13:11.817 clat (usec): min=153, max=364, avg=212.05, stdev=28.75 00:13:11.817 lat (usec): min=165, max=405, avg=231.37, stdev=30.25 00:13:11.817 clat percentiles (usec): 00:13:11.817 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 184], 20.00th=[ 190], 00:13:11.817 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 208], 60.00th=[ 215], 00:13:11.817 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[ 249], 95.00th=[ 265], 00:13:11.817 | 99.00th=[ 289], 99.50th=[ 359], 99.90th=[ 363], 99.95th=[ 363], 00:13:11.817 | 99.99th=[ 363] 00:13:11.817 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:13:11.817 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:11.817 lat (usec) : 250=87.08%, 500=8.80% 00:13:11.817 lat (msec) : 50=4.12% 00:13:11.817 cpu : usr=0.97%, sys=0.97%, ctx=534, majf=0, minf=1 00:13:11.817 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:11.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:11.817 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:11.817 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:11.817 00:13:11.817 Run status group 0 (all jobs): 00:13:11.817 READ: bw=85.3KiB/s (87.3kB/s), 85.3KiB/s-85.3KiB/s (87.3kB/s-87.3kB/s), io=88.0KiB (90.1kB), run=1032-1032msec 00:13:11.817 WRITE: bw=1984KiB/s (2032kB/s), 1984KiB/s-1984KiB/s (2032kB/s-2032kB/s), io=2048KiB (2097kB), run=1032-1032msec 00:13:11.817 00:13:11.817 Disk stats (read/write): 00:13:11.817 nvme0n1: ios=68/512, merge=0/0, ticks=775/103, in_queue=878, util=91.78% 00:13:11.817 14:17:53 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:11.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:11.817 14:17:53 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:11.817 14:17:53 -- common/autotest_common.sh@1205 -- # local i=0 00:13:11.817 14:17:53 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:11.817 14:17:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.074 14:17:53 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:12.074 14:17:53 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.074 14:17:53 -- common/autotest_common.sh@1217 -- # return 0 00:13:12.074 14:17:53 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:12.074 14:17:53 -- target/nmic.sh@53 -- # nvmftestfini 00:13:12.074 14:17:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:12.074 14:17:53 -- nvmf/common.sh@117 -- # sync 00:13:12.074 14:17:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:12.074 14:17:53 -- nvmf/common.sh@120 -- # set +e 00:13:12.074 14:17:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:12.074 14:17:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:12.074 rmmod nvme_tcp 00:13:12.074 rmmod nvme_fabrics 00:13:12.074 rmmod nvme_keyring 00:13:12.074 14:17:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:12.074 14:17:53 -- nvmf/common.sh@124 -- # set -e 00:13:12.074 14:17:53 -- 
nvmf/common.sh@125 -- # return 0 00:13:12.074 14:17:53 -- nvmf/common.sh@478 -- # '[' -n 3140783 ']' 00:13:12.074 14:17:53 -- nvmf/common.sh@479 -- # killprocess 3140783 00:13:12.074 14:17:53 -- common/autotest_common.sh@936 -- # '[' -z 3140783 ']' 00:13:12.074 14:17:53 -- common/autotest_common.sh@940 -- # kill -0 3140783 00:13:12.074 14:17:53 -- common/autotest_common.sh@941 -- # uname 00:13:12.074 14:17:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:12.074 14:17:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3140783 00:13:12.074 14:17:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:12.074 14:17:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:12.074 14:17:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3140783' 00:13:12.074 killing process with pid 3140783 00:13:12.074 14:17:53 -- common/autotest_common.sh@955 -- # kill 3140783 00:13:12.074 14:17:53 -- common/autotest_common.sh@960 -- # wait 3140783 00:13:12.334 14:17:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:12.334 14:17:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:12.334 14:17:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:12.334 14:17:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:12.334 14:17:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:12.334 14:17:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.334 14:17:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:12.334 14:17:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.242 14:17:55 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:14.243 00:13:14.243 real 0m9.070s 00:13:14.243 user 0m20.351s 00:13:14.243 sys 0m1.983s 00:13:14.243 14:17:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:14.243 14:17:55 -- common/autotest_common.sh@10 -- # set +x 00:13:14.243 ************************************ 00:13:14.243 END TEST nvmf_nmic 00:13:14.243 ************************************ 00:13:14.243 14:17:55 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:14.243 14:17:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:14.243 14:17:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:14.243 14:17:55 -- common/autotest_common.sh@10 -- # set +x 00:13:14.502 ************************************ 00:13:14.502 START TEST nvmf_fio_target 00:13:14.502 ************************************ 00:13:14.502 14:17:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:14.502 * Looking for test storage... 
00:13:14.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.502 14:17:55 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.502 14:17:55 -- nvmf/common.sh@7 -- # uname -s 00:13:14.502 14:17:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.502 14:17:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.502 14:17:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.502 14:17:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.502 14:17:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.502 14:17:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.502 14:17:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.502 14:17:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.502 14:17:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.502 14:17:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.502 14:17:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:14.502 14:17:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:13:14.502 14:17:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.502 14:17:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.502 14:17:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.502 14:17:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.502 14:17:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.502 14:17:55 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.502 14:17:55 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.502 14:17:55 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.502 14:17:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.503 14:17:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.503 14:17:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.503 14:17:55 -- paths/export.sh@5 -- # export PATH 00:13:14.503 14:17:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.503 14:17:55 -- nvmf/common.sh@47 -- # : 0 00:13:14.503 14:17:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:14.503 14:17:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:14.503 14:17:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.503 14:17:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.503 14:17:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.503 14:17:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:14.503 14:17:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:14.503 14:17:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:14.503 14:17:55 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:14.503 14:17:55 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:14.503 14:17:55 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:14.503 14:17:55 -- target/fio.sh@16 -- # nvmftestinit 00:13:14.503 14:17:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:14.503 14:17:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:14.503 14:17:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:14.503 14:17:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:14.503 14:17:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:14.503 14:17:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.503 14:17:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.503 14:17:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.503 14:17:55 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:14.503 14:17:55 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:14.503 14:17:55 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:14.503 14:17:55 -- common/autotest_common.sh@10 -- # set +x 00:13:16.410 14:17:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:16.410 14:17:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:16.410 14:17:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:16.410 14:17:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:16.410 14:17:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:16.410 14:17:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:16.410 14:17:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:16.410 14:17:57 -- nvmf/common.sh@295 -- # net_devs=() 
00:13:16.410 14:17:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:16.410 14:17:57 -- nvmf/common.sh@296 -- # e810=() 00:13:16.410 14:17:57 -- nvmf/common.sh@296 -- # local -ga e810 00:13:16.410 14:17:57 -- nvmf/common.sh@297 -- # x722=() 00:13:16.410 14:17:57 -- nvmf/common.sh@297 -- # local -ga x722 00:13:16.410 14:17:57 -- nvmf/common.sh@298 -- # mlx=() 00:13:16.410 14:17:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:16.410 14:17:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:16.410 14:17:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:16.410 14:17:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:16.410 14:17:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:16.410 14:17:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:16.410 14:17:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:16.410 14:17:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:16.410 14:17:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:16.410 14:17:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:16.410 14:17:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:16.410 14:17:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:16.410 14:17:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:16.410 14:17:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:16.410 14:17:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:16.410 14:17:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:16.410 14:17:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:16.410 14:17:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:16.410 14:17:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:16.410 14:17:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:13:16.410 Found 0000:08:00.0 (0x8086 - 0x159b) 00:13:16.410 14:17:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:16.410 14:17:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:16.410 14:17:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.410 14:17:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.410 14:17:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:16.410 14:17:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:16.410 14:17:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:13:16.410 Found 0000:08:00.1 (0x8086 - 0x159b) 00:13:16.410 14:17:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:16.410 14:17:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:16.410 14:17:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.410 14:17:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.410 14:17:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:16.410 14:17:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:16.410 14:17:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:16.410 14:17:57 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:16.410 14:17:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:16.410 14:17:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.410 14:17:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:16.410 14:17:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:13:16.410 14:17:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:13:16.410 Found net devices under 0000:08:00.0: cvl_0_0 00:13:16.410 14:17:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.410 14:17:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:16.410 14:17:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.410 14:17:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:16.410 14:17:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.410 14:17:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:13:16.410 Found net devices under 0000:08:00.1: cvl_0_1 00:13:16.410 14:17:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.410 14:17:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:16.410 14:17:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:16.410 14:17:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:16.410 14:17:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:16.410 14:17:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:16.410 14:17:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:16.410 14:17:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:16.410 14:17:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:16.410 14:17:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:16.410 14:17:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:16.410 14:17:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:16.410 14:17:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:16.410 14:17:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:16.410 14:17:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:16.410 14:17:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:16.410 14:17:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:16.410 14:17:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:16.410 14:17:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:16.410 14:17:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:16.410 14:17:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:16.410 14:17:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:16.410 14:17:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:16.410 14:17:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:16.410 14:17:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:16.410 14:17:57 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:16.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:16.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:13:16.410 00:13:16.410 --- 10.0.0.2 ping statistics --- 00:13:16.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.410 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:13:16.410 14:17:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:16.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:16.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:13:16.410 00:13:16.410 --- 10.0.0.1 ping statistics --- 00:13:16.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.410 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:13:16.410 14:17:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.410 14:17:57 -- nvmf/common.sh@411 -- # return 0 00:13:16.411 14:17:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:16.411 14:17:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.411 14:17:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:16.411 14:17:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:16.411 14:17:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.411 14:17:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:16.411 14:17:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:16.411 14:17:57 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:16.411 14:17:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:16.411 14:17:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:16.411 14:17:57 -- common/autotest_common.sh@10 -- # set +x 00:13:16.411 14:17:57 -- nvmf/common.sh@470 -- # nvmfpid=3142882 00:13:16.411 14:17:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:16.411 14:17:57 -- nvmf/common.sh@471 -- # waitforlisten 3142882 00:13:16.411 14:17:57 -- common/autotest_common.sh@817 -- # '[' -z 3142882 ']' 00:13:16.411 14:17:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.411 14:17:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:16.411 14:17:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.411 14:17:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:16.411 14:17:57 -- common/autotest_common.sh@10 -- # set +x 00:13:16.411 [2024-04-26 14:17:57.765715] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:13:16.411 [2024-04-26 14:17:57.765818] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.411 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.411 [2024-04-26 14:17:57.832903] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:16.411 [2024-04-26 14:17:57.951692] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.411 [2024-04-26 14:17:57.951755] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.411 [2024-04-26 14:17:57.951771] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.411 [2024-04-26 14:17:57.951784] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.411 [2024-04-26 14:17:57.951796] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
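Before this target starts, nvmf_tcp_init has rebuilt the same loopback-style topology used in the nmic run: the target port (cvl_0_0, 10.0.0.2) is moved into the private namespace cvl_0_0_ns_spdk while the initiator port (cvl_0_1, 10.0.0.1) stays in the default namespace, and TCP port 4420 is opened for NVMe/TCP. A condensed restatement of those steps, as a sketch of what the trace above executed:

NS=cvl_0_0_ns_spdk
ip netns add $NS
ip link set cvl_0_0 netns $NS                           # target port now lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, default namespace
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                      # initiator -> target sanity check

Every nvmf_tgt invocation that follows is wrapped in "ip netns exec $NS" so the target binds its listeners inside the namespace.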
00:13:16.411 [2024-04-26 14:17:57.951876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.411 [2024-04-26 14:17:57.951958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.411 [2024-04-26 14:17:57.952042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:16.411 [2024-04-26 14:17:57.952047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.670 14:17:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:16.670 14:17:58 -- common/autotest_common.sh@850 -- # return 0 00:13:16.670 14:17:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:16.670 14:17:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:16.670 14:17:58 -- common/autotest_common.sh@10 -- # set +x 00:13:16.670 14:17:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:16.670 14:17:58 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:16.928 [2024-04-26 14:17:58.358080] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:16.928 14:17:58 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:17.186 14:17:58 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:17.186 14:17:58 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:17.444 14:17:59 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:17.444 14:17:59 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:18.010 14:17:59 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:18.010 14:17:59 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:18.010 14:17:59 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:18.010 14:17:59 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:18.268 14:17:59 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:18.526 14:18:00 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:18.526 14:18:00 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:18.784 14:18:00 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:18.784 14:18:00 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:19.043 14:18:00 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:19.043 14:18:00 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:19.301 14:18:00 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:19.561 14:18:01 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:19.561 14:18:01 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:19.819 14:18:01 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:19.819 14:18:01 
-- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:20.077 14:18:01 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.335 [2024-04-26 14:18:01.798827] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.335 14:18:01 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:20.593 14:18:02 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:20.850 14:18:02 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.416 14:18:02 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:21.416 14:18:02 -- common/autotest_common.sh@1184 -- # local i=0 00:13:21.416 14:18:02 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.416 14:18:02 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:13:21.416 14:18:02 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:13:21.416 14:18:02 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:23.314 14:18:04 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:23.314 14:18:04 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:23.314 14:18:04 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:23.314 14:18:04 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:13:23.314 14:18:04 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:23.314 14:18:04 -- common/autotest_common.sh@1194 -- # return 0 00:13:23.314 14:18:04 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:23.314 [global] 00:13:23.314 thread=1 00:13:23.314 invalidate=1 00:13:23.314 rw=write 00:13:23.314 time_based=1 00:13:23.314 runtime=1 00:13:23.314 ioengine=libaio 00:13:23.314 direct=1 00:13:23.314 bs=4096 00:13:23.314 iodepth=1 00:13:23.314 norandommap=0 00:13:23.314 numjobs=1 00:13:23.314 00:13:23.314 verify_dump=1 00:13:23.314 verify_backlog=512 00:13:23.314 verify_state_save=0 00:13:23.314 do_verify=1 00:13:23.314 verify=crc32c-intel 00:13:23.314 [job0] 00:13:23.314 filename=/dev/nvme0n1 00:13:23.314 [job1] 00:13:23.314 filename=/dev/nvme0n2 00:13:23.314 [job2] 00:13:23.314 filename=/dev/nvme0n3 00:13:23.314 [job3] 00:13:23.314 filename=/dev/nvme0n4 00:13:23.314 Could not set queue depth (nvme0n1) 00:13:23.314 Could not set queue depth (nvme0n2) 00:13:23.314 Could not set queue depth (nvme0n3) 00:13:23.314 Could not set queue depth (nvme0n4) 00:13:23.572 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:23.572 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:23.572 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:23.572 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:23.572 fio-3.35 
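waitforserial is handed a device count of 4 here because cnode1 now exports the four namespaces built by fio.sh above: Malloc0, Malloc1, raid0 (RAID0 over Malloc2 and Malloc3), and concat0 (a concat of Malloc4, Malloc5, and Malloc6). Assuming namespaces enumerate in creation order, the layout behind the four fio job filenames is:

# ns1 Malloc0                            -> /dev/nvme0n1
# ns2 Malloc1                            -> /dev/nvme0n2
# ns3 raid0   = RAID0(Malloc2, Malloc3)  -> /dev/nvme0n3
# ns4 concat0 = concat(Malloc4..Malloc6) -> /dev/nvme0n4
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # should print 4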
00:13:23.572 Starting 4 threads 00:13:24.947 00:13:24.947 job0: (groupid=0, jobs=1): err= 0: pid=3143736: Fri Apr 26 14:18:06 2024 00:13:24.947 read: IOPS=305, BW=1223KiB/s (1252kB/s)(1272KiB/1040msec) 00:13:24.947 slat (nsec): min=6435, max=23864, avg=7851.46, stdev=2045.86 00:13:24.947 clat (usec): min=246, max=42308, avg=2862.90, stdev=9933.70 00:13:24.947 lat (usec): min=254, max=42319, avg=2870.75, stdev=9934.89 00:13:24.947 clat percentiles (usec): 00:13:24.947 | 1.00th=[ 255], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 277], 00:13:24.947 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 297], 00:13:24.947 | 70.00th=[ 306], 80.00th=[ 314], 90.00th=[ 334], 95.00th=[41157], 00:13:24.947 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:24.947 | 99.99th=[42206] 00:13:24.947 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:13:24.947 slat (nsec): min=8512, max=44618, avg=10596.89, stdev=3425.58 00:13:24.947 clat (usec): min=175, max=507, avg=232.28, stdev=34.03 00:13:24.947 lat (usec): min=184, max=547, avg=242.88, stdev=35.34 00:13:24.947 clat percentiles (usec): 00:13:24.947 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 206], 00:13:24.947 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 231], 60.00th=[ 241], 00:13:24.947 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 269], 95.00th=[ 285], 00:13:24.947 | 99.00th=[ 351], 99.50th=[ 388], 99.90th=[ 506], 99.95th=[ 506], 00:13:24.947 | 99.99th=[ 506] 00:13:24.947 bw ( KiB/s): min= 4096, max= 4096, per=21.95%, avg=4096.00, stdev= 0.00, samples=1 00:13:24.947 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:24.947 lat (usec) : 250=47.95%, 500=49.40%, 750=0.24% 00:13:24.947 lat (msec) : 50=2.41% 00:13:24.947 cpu : usr=0.48%, sys=1.15%, ctx=832, majf=0, minf=1 00:13:24.947 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:24.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.947 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.947 issued rwts: total=318,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.947 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:24.947 job1: (groupid=0, jobs=1): err= 0: pid=3143737: Fri Apr 26 14:18:06 2024 00:13:24.947 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:24.947 slat (nsec): min=6216, max=57606, avg=14428.95, stdev=6076.58 00:13:24.947 clat (usec): min=260, max=480, avg=318.90, stdev=26.01 00:13:24.947 lat (usec): min=266, max=506, avg=333.33, stdev=28.51 00:13:24.947 clat percentiles (usec): 00:13:24.947 | 1.00th=[ 269], 5.00th=[ 277], 10.00th=[ 289], 20.00th=[ 302], 00:13:24.947 | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 322], 00:13:24.947 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 351], 95.00th=[ 363], 00:13:24.947 | 99.00th=[ 396], 99.50th=[ 433], 99.90th=[ 469], 99.95th=[ 482], 00:13:24.947 | 99.99th=[ 482] 00:13:24.947 write: IOPS=1934, BW=7736KiB/s (7922kB/s)(7744KiB/1001msec); 0 zone resets 00:13:24.947 slat (nsec): min=7860, max=62683, avg=17252.47, stdev=6359.83 00:13:24.948 clat (usec): min=164, max=3680, avg=226.24, stdev=133.01 00:13:24.948 lat (usec): min=174, max=3709, avg=243.50, stdev=133.67 00:13:24.948 clat percentiles (usec): 00:13:24.948 | 1.00th=[ 174], 5.00th=[ 184], 10.00th=[ 192], 20.00th=[ 202], 00:13:24.948 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 212], 60.00th=[ 217], 00:13:24.948 | 70.00th=[ 223], 80.00th=[ 233], 90.00th=[ 253], 95.00th=[ 289], 
00:13:24.948 | 99.00th=[ 379], 99.50th=[ 478], 99.90th=[ 3228], 99.95th=[ 3687], 00:13:24.948 | 99.99th=[ 3687] 00:13:24.948 bw ( KiB/s): min= 8192, max= 8192, per=43.91%, avg=8192.00, stdev= 0.00, samples=1 00:13:24.948 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:24.948 lat (usec) : 250=49.77%, 500=50.03%, 750=0.09% 00:13:24.948 lat (msec) : 4=0.12% 00:13:24.948 cpu : usr=3.50%, sys=8.20%, ctx=3474, majf=0, minf=1 00:13:24.948 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:24.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.948 issued rwts: total=1536,1936,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.948 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:24.948 job2: (groupid=0, jobs=1): err= 0: pid=3143738: Fri Apr 26 14:18:06 2024 00:13:24.948 read: IOPS=255, BW=1022KiB/s (1046kB/s)(1044KiB/1022msec) 00:13:24.948 slat (nsec): min=5928, max=33110, avg=15990.99, stdev=7286.36 00:13:24.948 clat (usec): min=243, max=41309, avg=3352.45, stdev=10657.43 00:13:24.948 lat (usec): min=259, max=41331, avg=3368.44, stdev=10659.30 00:13:24.948 clat percentiles (usec): 00:13:24.948 | 1.00th=[ 247], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 277], 00:13:24.948 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 306], 00:13:24.948 | 70.00th=[ 322], 80.00th=[ 338], 90.00th=[ 400], 95.00th=[41157], 00:13:24.948 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:24.948 | 99.99th=[41157] 00:13:24.948 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:13:24.948 slat (nsec): min=7453, max=47275, avg=12735.57, stdev=6125.08 00:13:24.948 clat (usec): min=193, max=542, avg=260.24, stdev=42.11 00:13:24.948 lat (usec): min=202, max=570, avg=272.98, stdev=44.04 00:13:24.948 clat percentiles (usec): 00:13:24.948 | 1.00th=[ 200], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 231], 00:13:24.948 | 30.00th=[ 239], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 258], 00:13:24.948 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 314], 95.00th=[ 343], 00:13:24.948 | 99.00th=[ 408], 99.50th=[ 437], 99.90th=[ 545], 99.95th=[ 545], 00:13:24.948 | 99.99th=[ 545] 00:13:24.948 bw ( KiB/s): min= 4096, max= 4096, per=21.95%, avg=4096.00, stdev= 0.00, samples=1 00:13:24.948 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:24.948 lat (usec) : 250=30.66%, 500=66.49%, 750=0.26% 00:13:24.948 lat (msec) : 50=2.59% 00:13:24.948 cpu : usr=0.88%, sys=0.69%, ctx=774, majf=0, minf=1 00:13:24.948 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:24.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.948 issued rwts: total=261,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.948 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:24.948 job3: (groupid=0, jobs=1): err= 0: pid=3143739: Fri Apr 26 14:18:06 2024 00:13:24.948 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:24.948 slat (nsec): min=5129, max=31999, avg=12428.53, stdev=4545.14 00:13:24.948 clat (usec): min=245, max=520, avg=314.54, stdev=50.92 00:13:24.948 lat (usec): min=250, max=536, avg=326.97, stdev=53.45 00:13:24.948 clat percentiles (usec): 00:13:24.948 | 1.00th=[ 258], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 277], 00:13:24.948 | 30.00th=[ 281], 
40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 310], 00:13:24.948 | 70.00th=[ 334], 80.00th=[ 351], 90.00th=[ 367], 95.00th=[ 429], 00:13:24.948 | 99.00th=[ 490], 99.50th=[ 498], 99.90th=[ 506], 99.95th=[ 523], 00:13:24.948 | 99.99th=[ 523] 00:13:24.948 write: IOPS=1889, BW=7556KiB/s (7738kB/s)(7564KiB/1001msec); 0 zone resets 00:13:24.948 slat (nsec): min=6686, max=39664, avg=12836.27, stdev=4633.01 00:13:24.948 clat (usec): min=181, max=476, avg=244.12, stdev=32.99 00:13:24.948 lat (usec): min=194, max=483, avg=256.96, stdev=32.62 00:13:24.948 clat percentiles (usec): 00:13:24.948 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 217], 00:13:24.948 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 245], 00:13:24.948 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 310], 00:13:24.948 | 99.00th=[ 351], 99.50th=[ 371], 99.90th=[ 396], 99.95th=[ 478], 00:13:24.948 | 99.99th=[ 478] 00:13:24.948 bw ( KiB/s): min= 8192, max= 8192, per=43.91%, avg=8192.00, stdev= 0.00, samples=1 00:13:24.948 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:24.948 lat (usec) : 250=37.99%, 500=61.83%, 750=0.18% 00:13:24.948 cpu : usr=2.80%, sys=4.20%, ctx=3427, majf=0, minf=1 00:13:24.948 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:24.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.948 issued rwts: total=1536,1891,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.948 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:24.948 00:13:24.948 Run status group 0 (all jobs): 00:13:24.948 READ: bw=13.7MiB/s (14.4MB/s), 1022KiB/s-6138KiB/s (1046kB/s-6285kB/s), io=14.3MiB (15.0MB), run=1001-1040msec 00:13:24.948 WRITE: bw=18.2MiB/s (19.1MB/s), 1969KiB/s-7736KiB/s (2016kB/s-7922kB/s), io=18.9MiB (19.9MB), run=1001-1040msec 00:13:24.948 00:13:24.948 Disk stats (read/write): 00:13:24.948 nvme0n1: ios=201/512, merge=0/0, ticks=1691/117, in_queue=1808, util=98.00% 00:13:24.948 nvme0n2: ios=1465/1536, merge=0/0, ticks=1394/321, in_queue=1715, util=98.27% 00:13:24.948 nvme0n3: ios=72/512, merge=0/0, ticks=1427/132, in_queue=1559, util=98.43% 00:13:24.948 nvme0n4: ios=1397/1536, merge=0/0, ticks=618/356, in_queue=974, util=90.74% 00:13:24.948 14:18:06 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:24.948 [global] 00:13:24.948 thread=1 00:13:24.948 invalidate=1 00:13:24.948 rw=randwrite 00:13:24.948 time_based=1 00:13:24.948 runtime=1 00:13:24.948 ioengine=libaio 00:13:24.948 direct=1 00:13:24.948 bs=4096 00:13:24.948 iodepth=1 00:13:24.948 norandommap=0 00:13:24.948 numjobs=1 00:13:24.948 00:13:24.948 verify_dump=1 00:13:24.948 verify_backlog=512 00:13:24.948 verify_state_save=0 00:13:24.948 do_verify=1 00:13:24.948 verify=crc32c-intel 00:13:24.948 [job0] 00:13:24.948 filename=/dev/nvme0n1 00:13:24.948 [job1] 00:13:24.948 filename=/dev/nvme0n2 00:13:24.948 [job2] 00:13:24.948 filename=/dev/nvme0n3 00:13:24.948 [job3] 00:13:24.948 filename=/dev/nvme0n4 00:13:24.948 Could not set queue depth (nvme0n1) 00:13:24.948 Could not set queue depth (nvme0n2) 00:13:24.948 Could not set queue depth (nvme0n3) 00:13:24.948 Could not set queue depth (nvme0n4) 00:13:24.948 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:24.948 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:24.948 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:24.948 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:24.948 fio-3.35 00:13:24.948 Starting 4 threads 00:13:26.379 00:13:26.379 job0: (groupid=0, jobs=1): err= 0: pid=3144003: Fri Apr 26 14:18:07 2024 00:13:26.379 read: IOPS=316, BW=1267KiB/s (1297kB/s)(1296KiB/1023msec) 00:13:26.379 slat (nsec): min=7632, max=44715, avg=17300.81, stdev=6519.50 00:13:26.379 clat (usec): min=253, max=42006, avg=2710.65, stdev=9575.88 00:13:26.379 lat (usec): min=261, max=42037, avg=2727.95, stdev=9576.89 00:13:26.379 clat percentiles (usec): 00:13:26.379 | 1.00th=[ 269], 5.00th=[ 281], 10.00th=[ 281], 20.00th=[ 289], 00:13:26.379 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 314], 00:13:26.379 | 70.00th=[ 330], 80.00th=[ 396], 90.00th=[ 457], 95.00th=[40633], 00:13:26.379 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:26.379 | 99.99th=[42206] 00:13:26.379 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:13:26.379 slat (nsec): min=8376, max=61084, avg=19691.18, stdev=5628.96 00:13:26.379 clat (usec): min=193, max=483, avg=240.74, stdev=30.39 00:13:26.379 lat (usec): min=205, max=504, avg=260.43, stdev=30.83 00:13:26.379 clat percentiles (usec): 00:13:26.379 | 1.00th=[ 202], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:13:26.379 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 241], 00:13:26.379 | 70.00th=[ 247], 80.00th=[ 260], 90.00th=[ 277], 95.00th=[ 297], 00:13:26.379 | 99.00th=[ 334], 99.50th=[ 383], 99.90th=[ 482], 99.95th=[ 482], 00:13:26.379 | 99.99th=[ 482] 00:13:26.379 bw ( KiB/s): min= 4096, max= 4096, per=24.33%, avg=4096.00, stdev= 0.00, samples=1 00:13:26.379 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:26.379 lat (usec) : 250=44.98%, 500=52.27%, 750=0.48% 00:13:26.379 lat (msec) : 50=2.27% 00:13:26.379 cpu : usr=1.17%, sys=2.15%, ctx=837, majf=0, minf=1 00:13:26.379 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:26.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.379 issued rwts: total=324,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:26.379 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:26.379 job1: (groupid=0, jobs=1): err= 0: pid=3144005: Fri Apr 26 14:18:07 2024 00:13:26.379 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:26.379 slat (nsec): min=5976, max=43829, avg=12123.65, stdev=5962.62 00:13:26.379 clat (usec): min=201, max=41964, avg=356.10, stdev=1489.31 00:13:26.379 lat (usec): min=207, max=41977, avg=368.22, stdev=1489.54 00:13:26.379 clat percentiles (usec): 00:13:26.379 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 245], 00:13:26.379 | 30.00th=[ 262], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 314], 00:13:26.379 | 70.00th=[ 330], 80.00th=[ 343], 90.00th=[ 367], 95.00th=[ 433], 00:13:26.379 | 99.00th=[ 498], 99.50th=[ 783], 99.90th=[41157], 99.95th=[42206], 00:13:26.379 | 99.99th=[42206] 00:13:26.379 write: IOPS=1807, BW=7229KiB/s (7402kB/s)(7236KiB/1001msec); 0 zone resets 00:13:26.379 slat (nsec): min=7212, max=55223, avg=16882.10, stdev=6186.22 00:13:26.379 clat (usec): min=157, max=741, avg=215.46, stdev=41.67 00:13:26.379 
lat (usec): min=164, max=764, avg=232.34, stdev=44.07 00:13:26.379 clat percentiles (usec): 00:13:26.379 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 180], 00:13:26.379 | 30.00th=[ 190], 40.00th=[ 200], 50.00th=[ 210], 60.00th=[ 219], 00:13:26.379 | 70.00th=[ 229], 80.00th=[ 241], 90.00th=[ 269], 95.00th=[ 293], 00:13:26.379 | 99.00th=[ 322], 99.50th=[ 371], 99.90th=[ 635], 99.95th=[ 742], 00:13:26.379 | 99.99th=[ 742] 00:13:26.379 bw ( KiB/s): min= 8192, max= 8192, per=48.66%, avg=8192.00, stdev= 0.00, samples=1 00:13:26.379 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:26.379 lat (usec) : 250=56.20%, 500=43.29%, 750=0.27%, 1000=0.12% 00:13:26.379 lat (msec) : 2=0.03%, 4=0.03%, 50=0.06% 00:13:26.379 cpu : usr=3.40%, sys=6.40%, ctx=3346, majf=0, minf=1 00:13:26.379 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:26.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.379 issued rwts: total=1536,1809,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:26.379 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:26.379 job2: (groupid=0, jobs=1): err= 0: pid=3144006: Fri Apr 26 14:18:07 2024 00:13:26.379 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 00:13:26.379 slat (nsec): min=15862, max=38516, avg=25832.77, stdev=9304.71 00:13:26.379 clat (usec): min=463, max=41432, avg=39128.69, stdev=8637.09 00:13:26.379 lat (usec): min=502, max=41470, avg=39154.53, stdev=8634.29 00:13:26.379 clat percentiles (usec): 00:13:26.379 | 1.00th=[ 465], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:13:26.379 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:26.379 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:26.379 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:13:26.379 | 99.99th=[41681] 00:13:26.379 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:13:26.379 slat (nsec): min=9026, max=55378, avg=21337.25, stdev=6388.99 00:13:26.379 clat (usec): min=201, max=477, avg=258.67, stdev=30.89 00:13:26.379 lat (usec): min=221, max=507, avg=280.00, stdev=33.21 00:13:26.379 clat percentiles (usec): 00:13:26.379 | 1.00th=[ 210], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 237], 00:13:26.379 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 260], 00:13:26.379 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 326], 00:13:26.379 | 99.00th=[ 371], 99.50th=[ 412], 99.90th=[ 478], 99.95th=[ 478], 00:13:26.379 | 99.99th=[ 478] 00:13:26.379 bw ( KiB/s): min= 4096, max= 4096, per=24.33%, avg=4096.00, stdev= 0.00, samples=1 00:13:26.379 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:26.379 lat (usec) : 250=44.01%, 500=52.06% 00:13:26.379 lat (msec) : 50=3.93% 00:13:26.379 cpu : usr=0.60%, sys=1.59%, ctx=536, majf=0, minf=1 00:13:26.379 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:26.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.379 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:26.379 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:26.380 job3: (groupid=0, jobs=1): err= 0: pid=3144007: Fri Apr 26 14:18:07 2024 00:13:26.380 read: IOPS=1117, BW=4470KiB/s (4577kB/s)(4640KiB/1038msec) 
00:13:26.380 slat (nsec): min=6384, max=52563, avg=13720.44, stdev=6282.17 00:13:26.380 clat (usec): min=262, max=41083, avg=517.05, stdev=2660.86 00:13:26.380 lat (usec): min=271, max=41098, avg=530.77, stdev=2660.98 00:13:26.380 clat percentiles (usec): 00:13:26.380 | 1.00th=[ 277], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[ 297], 00:13:26.380 | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 330], 00:13:26.380 | 70.00th=[ 347], 80.00th=[ 371], 90.00th=[ 437], 95.00th=[ 486], 00:13:26.380 | 99.00th=[ 562], 99.50th=[ 3097], 99.90th=[41157], 99.95th=[41157], 00:13:26.380 | 99.99th=[41157] 00:13:26.380 write: IOPS=1479, BW=5919KiB/s (6061kB/s)(6144KiB/1038msec); 0 zone resets 00:13:26.380 slat (nsec): min=8507, max=60821, avg=19211.12, stdev=6385.82 00:13:26.380 clat (usec): min=183, max=546, avg=246.52, stdev=36.75 00:13:26.380 lat (usec): min=193, max=568, avg=265.73, stdev=37.86 00:13:26.380 clat percentiles (usec): 00:13:26.380 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:13:26.380 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 241], 60.00th=[ 247], 00:13:26.380 | 70.00th=[ 258], 80.00th=[ 273], 90.00th=[ 306], 95.00th=[ 318], 00:13:26.380 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 490], 99.95th=[ 545], 00:13:26.380 | 99.99th=[ 545] 00:13:26.380 bw ( KiB/s): min= 4320, max= 7968, per=36.49%, avg=6144.00, stdev=2579.53, samples=2 00:13:26.380 iops : min= 1080, max= 1992, avg=1536.00, stdev=644.88, samples=2 00:13:26.380 lat (usec) : 250=36.50%, 500=62.05%, 750=1.22% 00:13:26.380 lat (msec) : 4=0.04%, 50=0.19% 00:13:26.380 cpu : usr=3.57%, sys=5.79%, ctx=2697, majf=0, minf=1 00:13:26.380 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:26.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.380 issued rwts: total=1160,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:26.380 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:26.380 00:13:26.380 Run status group 0 (all jobs): 00:13:26.380 READ: bw=11.4MiB/s (12.0MB/s), 87.2KiB/s-6138KiB/s (89.3kB/s-6285kB/s), io=11.9MiB (12.5MB), run=1001-1038msec 00:13:26.380 WRITE: bw=16.4MiB/s (17.2MB/s), 2002KiB/s-7229KiB/s (2050kB/s-7402kB/s), io=17.1MiB (17.9MB), run=1001-1038msec 00:13:26.380 00:13:26.380 Disk stats (read/write): 00:13:26.380 nvme0n1: ios=359/512, merge=0/0, ticks=779/110, in_queue=889, util=91.18% 00:13:26.380 nvme0n2: ios=1215/1536, merge=0/0, ticks=1136/316, in_queue=1452, util=94.21% 00:13:26.380 nvme0n3: ios=60/512, merge=0/0, ticks=864/124, in_queue=988, util=100.00% 00:13:26.380 nvme0n4: ios=1156/1536, merge=0/0, ticks=1437/345, in_queue=1782, util=98.21% 00:13:26.380 14:18:07 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:26.380 [global] 00:13:26.380 thread=1 00:13:26.380 invalidate=1 00:13:26.380 rw=write 00:13:26.380 time_based=1 00:13:26.380 runtime=1 00:13:26.380 ioengine=libaio 00:13:26.380 direct=1 00:13:26.380 bs=4096 00:13:26.380 iodepth=128 00:13:26.380 norandommap=0 00:13:26.380 numjobs=1 00:13:26.380 00:13:26.380 verify_dump=1 00:13:26.380 verify_backlog=512 00:13:26.380 verify_state_save=0 00:13:26.380 do_verify=1 00:13:26.380 verify=crc32c-intel 00:13:26.380 [job0] 00:13:26.380 filename=/dev/nvme0n1 00:13:26.380 [job1] 00:13:26.380 filename=/dev/nvme0n2 00:13:26.380 [job2] 00:13:26.380 filename=/dev/nvme0n3 00:13:26.380 [job3] 
00:13:26.380 filename=/dev/nvme0n4 00:13:26.380 Could not set queue depth (nvme0n1) 00:13:26.380 Could not set queue depth (nvme0n2) 00:13:26.380 Could not set queue depth (nvme0n3) 00:13:26.380 Could not set queue depth (nvme0n4) 00:13:26.380 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:26.380 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:26.380 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:26.380 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:26.380 fio-3.35 00:13:26.380 Starting 4 threads 00:13:27.756 00:13:27.756 job0: (groupid=0, jobs=1): err= 0: pid=3144194: Fri Apr 26 14:18:09 2024 00:13:27.756 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:13:27.756 slat (usec): min=3, max=30649, avg=163.24, stdev=1223.71 00:13:27.756 clat (usec): min=6090, max=57617, avg=20876.95, stdev=7510.14 00:13:27.756 lat (usec): min=6097, max=57634, avg=21040.19, stdev=7578.57 00:13:27.756 clat percentiles (usec): 00:13:27.756 | 1.00th=[ 6128], 5.00th=[12387], 10.00th=[14091], 20.00th=[15533], 00:13:27.756 | 30.00th=[16057], 40.00th=[17433], 50.00th=[18744], 60.00th=[21103], 00:13:27.756 | 70.00th=[22676], 80.00th=[24773], 90.00th=[32900], 95.00th=[37487], 00:13:27.756 | 99.00th=[45876], 99.50th=[45876], 99.90th=[45876], 99.95th=[53216], 00:13:27.756 | 99.99th=[57410] 00:13:27.756 write: IOPS=3416, BW=13.3MiB/s (14.0MB/s)(13.4MiB/1002msec); 0 zone resets 00:13:27.756 slat (usec): min=4, max=24260, avg=138.62, stdev=889.92 00:13:27.756 clat (usec): min=778, max=62199, avg=18274.64, stdev=8002.16 00:13:27.756 lat (usec): min=6267, max=62235, avg=18413.26, stdev=8079.21 00:13:27.756 clat percentiles (usec): 00:13:27.756 | 1.00th=[ 6980], 5.00th=[10421], 10.00th=[11338], 20.00th=[12387], 00:13:27.756 | 30.00th=[13435], 40.00th=[15008], 50.00th=[16057], 60.00th=[16712], 00:13:27.756 | 70.00th=[17957], 80.00th=[23725], 90.00th=[31851], 95.00th=[33817], 00:13:27.756 | 99.00th=[46400], 99.50th=[46400], 99.90th=[46400], 99.95th=[46400], 00:13:27.756 | 99.99th=[62129] 00:13:27.756 bw ( KiB/s): min=12288, max=14080, per=22.74%, avg=13184.00, stdev=1267.14, samples=2 00:13:27.756 iops : min= 3072, max= 3520, avg=3296.00, stdev=316.78, samples=2 00:13:27.756 lat (usec) : 1000=0.02% 00:13:27.756 lat (msec) : 10=2.43%, 20=62.29%, 50=35.21%, 100=0.05% 00:13:27.756 cpu : usr=3.90%, sys=4.30%, ctx=235, majf=0, minf=13 00:13:27.756 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:13:27.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:27.756 issued rwts: total=3072,3423,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.756 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:27.756 job1: (groupid=0, jobs=1): err= 0: pid=3144195: Fri Apr 26 14:18:09 2024 00:13:27.756 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:13:27.756 slat (usec): min=2, max=15389, avg=120.84, stdev=788.62 00:13:27.756 clat (usec): min=3276, max=54012, avg=15596.52, stdev=8107.64 00:13:27.756 lat (usec): min=3308, max=54018, avg=15717.36, stdev=8166.13 00:13:27.756 clat percentiles (usec): 00:13:27.756 | 1.00th=[ 6194], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[11731], 00:13:27.756 | 30.00th=[11994], 40.00th=[12256], 
50.00th=[12518], 60.00th=[12911], 00:13:27.756 | 70.00th=[13698], 80.00th=[17957], 90.00th=[25560], 95.00th=[37487], 00:13:27.756 | 99.00th=[46924], 99.50th=[49021], 99.90th=[50070], 99.95th=[53740], 00:13:27.756 | 99.99th=[54264] 00:13:27.756 write: IOPS=4089, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1002msec); 0 zone resets 00:13:27.756 slat (usec): min=4, max=23019, avg=114.45, stdev=780.45 00:13:27.756 clat (usec): min=755, max=49165, avg=15244.05, stdev=8127.01 00:13:27.756 lat (usec): min=3002, max=49174, avg=15358.49, stdev=8187.06 00:13:27.756 clat percentiles (usec): 00:13:27.757 | 1.00th=[ 6456], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10421], 00:13:27.757 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:13:27.757 | 70.00th=[12780], 80.00th=[17957], 90.00th=[28443], 95.00th=[34866], 00:13:27.757 | 99.00th=[40109], 99.50th=[44303], 99.90th=[49021], 99.95th=[49021], 00:13:27.757 | 99.99th=[49021] 00:13:27.757 bw ( KiB/s): min=15536, max=17232, per=28.26%, avg=16384.00, stdev=1199.25, samples=2 00:13:27.757 iops : min= 3884, max= 4308, avg=4096.00, stdev=299.81, samples=2 00:13:27.757 lat (usec) : 1000=0.01% 00:13:27.757 lat (msec) : 4=0.22%, 10=9.70%, 20=72.82%, 50=17.20%, 100=0.05% 00:13:27.757 cpu : usr=4.80%, sys=5.79%, ctx=353, majf=0, minf=15 00:13:27.757 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:27.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:27.757 issued rwts: total=4096,4098,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.757 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:27.757 job2: (groupid=0, jobs=1): err= 0: pid=3144196: Fri Apr 26 14:18:09 2024 00:13:27.757 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:13:27.757 slat (usec): min=3, max=15981, avg=123.66, stdev=764.84 00:13:27.757 clat (usec): min=9574, max=36775, avg=17196.10, stdev=4731.72 00:13:27.757 lat (usec): min=9582, max=36788, avg=17319.76, stdev=4781.52 00:13:27.757 clat percentiles (usec): 00:13:27.757 | 1.00th=[11338], 5.00th=[13173], 10.00th=[13566], 20.00th=[13960], 00:13:27.757 | 30.00th=[14222], 40.00th=[14615], 50.00th=[15139], 60.00th=[16450], 00:13:27.757 | 70.00th=[18744], 80.00th=[19792], 90.00th=[23987], 95.00th=[28705], 00:13:27.757 | 99.00th=[32113], 99.50th=[32113], 99.90th=[32113], 99.95th=[35390], 00:13:27.757 | 99.99th=[36963] 00:13:27.757 write: IOPS=3939, BW=15.4MiB/s (16.1MB/s)(15.4MiB/1002msec); 0 zone resets 00:13:27.757 slat (usec): min=4, max=18386, avg=124.80, stdev=720.71 00:13:27.757 clat (usec): min=647, max=50982, avg=16545.54, stdev=5570.56 00:13:27.757 lat (usec): min=4700, max=50987, avg=16670.34, stdev=5622.45 00:13:27.757 clat percentiles (usec): 00:13:27.757 | 1.00th=[ 5145], 5.00th=[10552], 10.00th=[13173], 20.00th=[13566], 00:13:27.757 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14484], 60.00th=[14877], 00:13:27.757 | 70.00th=[15926], 80.00th=[18482], 90.00th=[26346], 95.00th=[29230], 00:13:27.757 | 99.00th=[33424], 99.50th=[34341], 99.90th=[39060], 99.95th=[43779], 00:13:27.757 | 99.99th=[51119] 00:13:27.757 bw ( KiB/s): min=14688, max=15864, per=26.34%, avg=15276.00, stdev=831.56, samples=2 00:13:27.757 iops : min= 3672, max= 3966, avg=3819.00, stdev=207.89, samples=2 00:13:27.757 lat (usec) : 750=0.01% 00:13:27.757 lat (msec) : 10=1.97%, 20=78.93%, 50=19.08%, 100=0.01% 00:13:27.757 cpu : usr=3.60%, sys=6.89%, ctx=382, majf=0, minf=9 00:13:27.757 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:27.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:27.757 issued rwts: total=3584,3947,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.757 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:27.757 job3: (groupid=0, jobs=1): err= 0: pid=3144197: Fri Apr 26 14:18:09 2024 00:13:27.757 read: IOPS=2992, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1003msec) 00:13:27.757 slat (usec): min=4, max=16324, avg=169.06, stdev=1075.80 00:13:27.757 clat (usec): min=543, max=63859, avg=20603.22, stdev=10919.13 00:13:27.757 lat (usec): min=3060, max=63878, avg=20772.28, stdev=11020.60 00:13:27.757 clat percentiles (usec): 00:13:27.757 | 1.00th=[ 3359], 5.00th=[ 9896], 10.00th=[11338], 20.00th=[13960], 00:13:27.757 | 30.00th=[14353], 40.00th=[14615], 50.00th=[15139], 60.00th=[17695], 00:13:27.757 | 70.00th=[22152], 80.00th=[28705], 90.00th=[36963], 95.00th=[45876], 00:13:27.757 | 99.00th=[52167], 99.50th=[52167], 99.90th=[54789], 99.95th=[61604], 00:13:27.757 | 99.99th=[63701] 00:13:27.757 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:13:27.757 slat (usec): min=5, max=15374, avg=152.24, stdev=832.71 00:13:27.757 clat (usec): min=5023, max=57661, avg=21079.47, stdev=10461.45 00:13:27.757 lat (usec): min=5038, max=57682, avg=21231.71, stdev=10516.03 00:13:27.757 clat percentiles (usec): 00:13:27.757 | 1.00th=[ 7373], 5.00th=[10159], 10.00th=[11863], 20.00th=[13566], 00:13:27.757 | 30.00th=[14091], 40.00th=[14484], 50.00th=[18482], 60.00th=[20579], 00:13:27.757 | 70.00th=[24511], 80.00th=[27132], 90.00th=[35914], 95.00th=[45876], 00:13:27.757 | 99.00th=[54264], 99.50th=[55313], 99.90th=[57410], 99.95th=[57410], 00:13:27.757 | 99.99th=[57410] 00:13:27.757 bw ( KiB/s): min= 9208, max=15368, per=21.19%, avg=12288.00, stdev=4355.78, samples=2 00:13:27.757 iops : min= 2302, max= 3842, avg=3072.00, stdev=1088.94, samples=2 00:13:27.757 lat (usec) : 750=0.02% 00:13:27.757 lat (msec) : 4=0.51%, 10=4.15%, 20=57.12%, 50=35.35%, 100=2.85% 00:13:27.757 cpu : usr=3.69%, sys=5.39%, ctx=343, majf=0, minf=13 00:13:27.757 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:13:27.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.757 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:27.757 issued rwts: total=3001,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.757 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:27.757 00:13:27.757 Run status group 0 (all jobs): 00:13:27.757 READ: bw=53.6MiB/s (56.2MB/s), 11.7MiB/s-16.0MiB/s (12.3MB/s-16.7MB/s), io=53.7MiB (56.3MB), run=1002-1003msec 00:13:27.757 WRITE: bw=56.6MiB/s (59.4MB/s), 12.0MiB/s-16.0MiB/s (12.5MB/s-16.8MB/s), io=56.8MiB (59.6MB), run=1002-1003msec 00:13:27.757 00:13:27.757 Disk stats (read/write): 00:13:27.757 nvme0n1: ios=2598/2873, merge=0/0, ticks=28791/25788, in_queue=54579, util=98.90% 00:13:27.757 nvme0n2: ios=3589/3728, merge=0/0, ticks=18304/18189, in_queue=36493, util=85.28% 00:13:27.757 nvme0n3: ios=3130/3163, merge=0/0, ticks=25864/23773, in_queue=49637, util=97.08% 00:13:27.757 nvme0n4: ios=2354/2560, merge=0/0, ticks=20227/23797, in_queue=44024, util=98.53% 00:13:27.757 14:18:09 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:27.757 [global] 
00:13:27.757 thread=1 00:13:27.757 invalidate=1 00:13:27.757 rw=randwrite 00:13:27.757 time_based=1 00:13:27.757 runtime=1 00:13:27.757 ioengine=libaio 00:13:27.757 direct=1 00:13:27.757 bs=4096 00:13:27.757 iodepth=128 00:13:27.757 norandommap=0 00:13:27.757 numjobs=1 00:13:27.757 00:13:27.757 verify_dump=1 00:13:27.757 verify_backlog=512 00:13:27.757 verify_state_save=0 00:13:27.757 do_verify=1 00:13:27.757 verify=crc32c-intel 00:13:27.757 [job0] 00:13:27.757 filename=/dev/nvme0n1 00:13:27.757 [job1] 00:13:27.757 filename=/dev/nvme0n2 00:13:27.757 [job2] 00:13:27.757 filename=/dev/nvme0n3 00:13:27.757 [job3] 00:13:27.757 filename=/dev/nvme0n4 00:13:27.757 Could not set queue depth (nvme0n1) 00:13:27.757 Could not set queue depth (nvme0n2) 00:13:27.757 Could not set queue depth (nvme0n3) 00:13:27.757 Could not set queue depth (nvme0n4) 00:13:27.757 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:27.757 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:27.757 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:27.757 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:27.757 fio-3.35 00:13:27.757 Starting 4 threads 00:13:29.147 00:13:29.147 job0: (groupid=0, jobs=1): err= 0: pid=3144771: Fri Apr 26 14:18:10 2024 00:13:29.147 read: IOPS=3574, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:13:29.147 slat (usec): min=2, max=8323, avg=113.74, stdev=612.42 00:13:29.147 clat (usec): min=3718, max=45451, avg=15688.43, stdev=5658.01 00:13:29.147 lat (usec): min=4536, max=45460, avg=15802.17, stdev=5656.74 00:13:29.147 clat percentiles (usec): 00:13:29.147 | 1.00th=[ 8848], 5.00th=[10945], 10.00th=[11600], 20.00th=[12518], 00:13:29.147 | 30.00th=[13304], 40.00th=[13566], 50.00th=[14222], 60.00th=[15008], 00:13:29.147 | 70.00th=[16057], 80.00th=[17171], 90.00th=[20317], 95.00th=[23725], 00:13:29.147 | 99.00th=[39584], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:13:29.147 | 99.99th=[45351] 00:13:29.147 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:13:29.147 slat (usec): min=4, max=17888, avg=136.45, stdev=767.36 00:13:29.147 clat (usec): min=4618, max=40825, avg=17315.43, stdev=7322.16 00:13:29.147 lat (usec): min=4965, max=45450, avg=17451.89, stdev=7365.92 00:13:29.147 clat percentiles (usec): 00:13:29.147 | 1.00th=[ 7701], 5.00th=[10028], 10.00th=[11207], 20.00th=[12256], 00:13:29.148 | 30.00th=[12649], 40.00th=[13698], 50.00th=[15008], 60.00th=[16450], 00:13:29.148 | 70.00th=[17957], 80.00th=[21627], 90.00th=[29754], 95.00th=[35390], 00:13:29.148 | 99.00th=[39584], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633], 00:13:29.148 | 99.99th=[40633] 00:13:29.148 bw ( KiB/s): min=14736, max=17106, per=26.26%, avg=15921.00, stdev=1675.84, samples=2 00:13:29.148 iops : min= 3684, max= 4276, avg=3980.00, stdev=418.61, samples=2 00:13:29.148 lat (msec) : 4=0.01%, 10=4.23%, 20=77.35%, 50=18.41% 00:13:29.148 cpu : usr=2.79%, sys=5.48%, ctx=432, majf=0, minf=1 00:13:29.148 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:29.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:29.148 issued rwts: total=3592,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.148 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:13:29.148 job1: (groupid=0, jobs=1): err= 0: pid=3144772: Fri Apr 26 14:18:10 2024 00:13:29.148 read: IOPS=3089, BW=12.1MiB/s (12.7MB/s)(12.1MiB/1005msec) 00:13:29.148 slat (usec): min=2, max=18768, avg=138.76, stdev=863.99 00:13:29.148 clat (usec): min=1664, max=57855, avg=18256.46, stdev=9095.60 00:13:29.148 lat (usec): min=4464, max=57871, avg=18395.22, stdev=9162.55 00:13:29.148 clat percentiles (usec): 00:13:29.148 | 1.00th=[ 8586], 5.00th=[10683], 10.00th=[11994], 20.00th=[12387], 00:13:29.148 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13566], 60.00th=[15008], 00:13:29.148 | 70.00th=[19530], 80.00th=[24511], 90.00th=[32113], 95.00th=[39060], 00:13:29.148 | 99.00th=[49546], 99.50th=[49546], 99.90th=[50070], 99.95th=[52167], 00:13:29.148 | 99.99th=[57934] 00:13:29.148 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:13:29.148 slat (usec): min=4, max=18030, avg=152.17, stdev=916.80 00:13:29.148 clat (usec): min=8217, max=62715, avg=19427.46, stdev=10466.17 00:13:29.148 lat (usec): min=8232, max=62736, avg=19579.63, stdev=10538.80 00:13:29.148 clat percentiles (usec): 00:13:29.148 | 1.00th=[ 8979], 5.00th=[11469], 10.00th=[11863], 20.00th=[12256], 00:13:29.148 | 30.00th=[12387], 40.00th=[12780], 50.00th=[14484], 60.00th=[17695], 00:13:29.148 | 70.00th=[21365], 80.00th=[26608], 90.00th=[33162], 95.00th=[41157], 00:13:29.148 | 99.00th=[58983], 99.50th=[62129], 99.90th=[62653], 99.95th=[62653], 00:13:29.148 | 99.99th=[62653] 00:13:29.148 bw ( KiB/s): min=12312, max=15624, per=23.04%, avg=13968.00, stdev=2341.94, samples=2 00:13:29.148 iops : min= 3078, max= 3906, avg=3492.00, stdev=585.48, samples=2 00:13:29.148 lat (msec) : 2=0.01%, 10=1.76%, 20=68.81%, 50=27.87%, 100=1.54% 00:13:29.148 cpu : usr=3.78%, sys=4.88%, ctx=278, majf=0, minf=1 00:13:29.148 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:29.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:29.148 issued rwts: total=3105,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:29.148 job2: (groupid=0, jobs=1): err= 0: pid=3144773: Fri Apr 26 14:18:10 2024 00:13:29.148 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:13:29.148 slat (usec): min=3, max=11469, avg=135.80, stdev=715.84 00:13:29.148 clat (usec): min=7276, max=36038, avg=17485.03, stdev=5489.52 00:13:29.148 lat (usec): min=7280, max=36243, avg=17620.83, stdev=5498.94 00:13:29.148 clat percentiles (usec): 00:13:29.148 | 1.00th=[ 9634], 5.00th=[12125], 10.00th=[12649], 20.00th=[13698], 00:13:29.148 | 30.00th=[14353], 40.00th=[14746], 50.00th=[15401], 60.00th=[15926], 00:13:29.148 | 70.00th=[17957], 80.00th=[21890], 90.00th=[26346], 95.00th=[29230], 00:13:29.148 | 99.00th=[32900], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:13:29.148 | 99.99th=[35914] 00:13:29.148 write: IOPS=3882, BW=15.2MiB/s (15.9MB/s)(15.2MiB/1002msec); 0 zone resets 00:13:29.148 slat (usec): min=4, max=11641, avg=124.60, stdev=712.41 00:13:29.148 clat (usec): min=417, max=33027, avg=16375.03, stdev=5426.13 00:13:29.148 lat (usec): min=3361, max=33037, avg=16499.63, stdev=5448.76 00:13:29.148 clat percentiles (usec): 00:13:29.148 | 1.00th=[ 6652], 5.00th=[10159], 10.00th=[11338], 20.00th=[12518], 00:13:29.148 | 30.00th=[13173], 40.00th=[13829], 50.00th=[15008], 60.00th=[15926], 00:13:29.148 
| 70.00th=[17433], 80.00th=[19268], 90.00th=[24773], 95.00th=[29230], 00:13:29.148 | 99.00th=[32113], 99.50th=[32900], 99.90th=[32900], 99.95th=[32900], 00:13:29.148 | 99.99th=[32900] 00:13:29.148 bw ( KiB/s): min=13376, max=16720, per=24.82%, avg=15048.00, stdev=2364.57, samples=2 00:13:29.148 iops : min= 3344, max= 4180, avg=3762.00, stdev=591.14, samples=2 00:13:29.148 lat (usec) : 500=0.01% 00:13:29.148 lat (msec) : 4=0.43%, 10=2.53%, 20=76.64%, 50=20.39% 00:13:29.148 cpu : usr=3.90%, sys=5.99%, ctx=365, majf=0, minf=1 00:13:29.148 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:29.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:29.148 issued rwts: total=3584,3890,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:29.148 job3: (groupid=0, jobs=1): err= 0: pid=3144774: Fri Apr 26 14:18:10 2024 00:13:29.148 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:13:29.148 slat (usec): min=2, max=14701, avg=134.36, stdev=799.33 00:13:29.148 clat (usec): min=6748, max=40572, avg=18032.64, stdev=5533.56 00:13:29.148 lat (usec): min=6753, max=40577, avg=18167.00, stdev=5551.21 00:13:29.148 clat percentiles (usec): 00:13:29.148 | 1.00th=[ 9503], 5.00th=[11338], 10.00th=[12649], 20.00th=[14222], 00:13:29.148 | 30.00th=[14746], 40.00th=[15139], 50.00th=[16057], 60.00th=[18220], 00:13:29.148 | 70.00th=[19530], 80.00th=[21890], 90.00th=[26084], 95.00th=[29492], 00:13:29.148 | 99.00th=[33817], 99.50th=[34866], 99.90th=[40633], 99.95th=[40633], 00:13:29.148 | 99.99th=[40633] 00:13:29.148 write: IOPS=3649, BW=14.3MiB/s (14.9MB/s)(14.3MiB/1003msec); 0 zone resets 00:13:29.148 slat (usec): min=4, max=9215, avg=128.55, stdev=751.92 00:13:29.148 clat (usec): min=356, max=42550, avg=16956.41, stdev=5052.27 00:13:29.148 lat (usec): min=4081, max=42567, avg=17084.96, stdev=5091.17 00:13:29.148 clat percentiles (usec): 00:13:29.148 | 1.00th=[ 4555], 5.00th=[11338], 10.00th=[12518], 20.00th=[13566], 00:13:29.148 | 30.00th=[13960], 40.00th=[14877], 50.00th=[16057], 60.00th=[16909], 00:13:29.148 | 70.00th=[17695], 80.00th=[19268], 90.00th=[24773], 95.00th=[27657], 00:13:29.148 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[42206], 00:13:29.148 | 99.99th=[42730] 00:13:29.148 bw ( KiB/s): min=12288, max=16416, per=23.68%, avg=14352.00, stdev=2918.94, samples=2 00:13:29.148 iops : min= 3072, max= 4104, avg=3588.00, stdev=729.73, samples=2 00:13:29.148 lat (usec) : 500=0.01% 00:13:29.148 lat (msec) : 10=2.33%, 20=73.85%, 50=23.80% 00:13:29.148 cpu : usr=2.89%, sys=4.59%, ctx=308, majf=0, minf=1 00:13:29.148 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:13:29.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:29.148 issued rwts: total=3584,3660,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:29.148 00:13:29.148 Run status group 0 (all jobs): 00:13:29.148 READ: bw=53.9MiB/s (56.5MB/s), 12.1MiB/s-14.0MiB/s (12.7MB/s-14.7MB/s), io=54.2MiB (56.8MB), run=1002-1005msec 00:13:29.148 WRITE: bw=59.2MiB/s (62.1MB/s), 13.9MiB/s-15.9MiB/s (14.6MB/s-16.7MB/s), io=59.5MiB (62.4MB), run=1002-1005msec 00:13:29.148 00:13:29.148 Disk stats (read/write): 00:13:29.148 nvme0n1: ios=3271/3584, 
merge=0/0, ticks=18784/18178, in_queue=36962, util=95.89% 00:13:29.148 nvme0n2: ios=2714/3072, merge=0/0, ticks=15016/20527, in_queue=35543, util=96.24% 00:13:29.148 nvme0n3: ios=3029/3072, merge=0/0, ticks=16267/15318, in_queue=31585, util=95.93% 00:13:29.148 nvme0n4: ios=2953/3072, merge=0/0, ticks=22924/14245, in_queue=37169, util=99.79% 00:13:29.148 14:18:10 -- target/fio.sh@55 -- # sync 00:13:29.148 14:18:10 -- target/fio.sh@59 -- # fio_pid=3144998 00:13:29.148 14:18:10 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:29.148 14:18:10 -- target/fio.sh@61 -- # sleep 3 00:13:29.148 [global] 00:13:29.148 thread=1 00:13:29.148 invalidate=1 00:13:29.148 rw=read 00:13:29.148 time_based=1 00:13:29.148 runtime=10 00:13:29.148 ioengine=libaio 00:13:29.148 direct=1 00:13:29.148 bs=4096 00:13:29.148 iodepth=1 00:13:29.148 norandommap=1 00:13:29.148 numjobs=1 00:13:29.148 00:13:29.148 [job0] 00:13:29.148 filename=/dev/nvme0n1 00:13:29.148 [job1] 00:13:29.148 filename=/dev/nvme0n2 00:13:29.148 [job2] 00:13:29.148 filename=/dev/nvme0n3 00:13:29.148 [job3] 00:13:29.148 filename=/dev/nvme0n4 00:13:29.148 Could not set queue depth (nvme0n1) 00:13:29.148 Could not set queue depth (nvme0n2) 00:13:29.148 Could not set queue depth (nvme0n3) 00:13:29.148 Could not set queue depth (nvme0n4) 00:13:29.148 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:29.148 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:29.148 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:29.148 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:29.148 fio-3.35 00:13:29.148 Starting 4 threads 00:13:32.425 14:18:13 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:32.425 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=28909568, buflen=4096 00:13:32.425 fio: pid=3145162, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:32.425 14:18:13 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:32.683 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=720896, buflen=4096 00:13:32.683 fio: pid=3145151, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:32.683 14:18:14 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:32.683 14:18:14 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:32.941 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=28246016, buflen=4096 00:13:32.941 fio: pid=3145104, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:32.941 14:18:14 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:32.941 14:18:14 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:33.199 14:18:14 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:33.199 14:18:14 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 
00:13:33.199 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=35061760, buflen=4096 00:13:33.199 fio: pid=3145121, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:33.199 00:13:33.199 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3145104: Fri Apr 26 14:18:14 2024 00:13:33.199 read: IOPS=1963, BW=7854KiB/s (8043kB/s)(26.9MiB/3512msec) 00:13:33.199 slat (usec): min=5, max=13718, avg=15.29, stdev=201.96 00:13:33.199 clat (usec): min=243, max=41964, avg=490.04, stdev=2398.09 00:13:33.199 lat (usec): min=249, max=41979, avg=503.92, stdev=2404.16 00:13:33.199 clat percentiles (usec): 00:13:33.199 | 1.00th=[ 265], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[ 306], 00:13:33.199 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 343], 00:13:33.199 | 70.00th=[ 351], 80.00th=[ 371], 90.00th=[ 400], 95.00th=[ 433], 00:13:33.199 | 99.00th=[ 619], 99.50th=[ 971], 99.90th=[41157], 99.95th=[41681], 00:13:33.199 | 99.99th=[42206] 00:13:33.199 bw ( KiB/s): min= 5024, max=11080, per=38.63%, avg=9152.00, stdev=2150.46, samples=6 00:13:33.199 iops : min= 1256, max= 2770, avg=2288.00, stdev=537.62, samples=6 00:13:33.199 lat (usec) : 250=0.23%, 500=98.32%, 750=0.65%, 1000=0.29% 00:13:33.199 lat (msec) : 2=0.07%, 4=0.03%, 20=0.03%, 50=0.36% 00:13:33.199 cpu : usr=1.71%, sys=3.67%, ctx=6902, majf=0, minf=1 00:13:33.199 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:33.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.199 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.199 issued rwts: total=6897,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.199 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:33.199 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3145121: Fri Apr 26 14:18:14 2024 00:13:33.199 read: IOPS=2234, BW=8938KiB/s (9152kB/s)(33.4MiB/3831msec) 00:13:33.199 slat (usec): min=5, max=8790, avg=14.10, stdev=133.07 00:13:33.199 clat (usec): min=222, max=42048, avg=430.13, stdev=2122.92 00:13:33.199 lat (usec): min=230, max=42065, avg=444.23, stdev=2127.41 00:13:33.200 clat percentiles (usec): 00:13:33.200 | 1.00th=[ 251], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 285], 00:13:33.200 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 322], 00:13:33.200 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 371], 95.00th=[ 416], 00:13:33.200 | 99.00th=[ 498], 99.50th=[ 660], 99.90th=[42206], 99.95th=[42206], 00:13:33.200 | 99.99th=[42206] 00:13:33.200 bw ( KiB/s): min= 4040, max=12304, per=40.78%, avg=9662.86, stdev=3606.46, samples=7 00:13:33.200 iops : min= 1010, max= 3076, avg=2415.71, stdev=901.62, samples=7 00:13:33.200 lat (usec) : 250=0.95%, 500=98.19%, 750=0.43%, 1000=0.09% 00:13:33.200 lat (msec) : 2=0.06%, 50=0.27% 00:13:33.200 cpu : usr=1.64%, sys=4.62%, ctx=8566, majf=0, minf=1 00:13:33.200 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:33.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.200 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.200 issued rwts: total=8561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.200 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:33.200 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3145151: Fri Apr 26 14:18:14 2024 00:13:33.200 read: 
IOPS=55, BW=220KiB/s (225kB/s)(704KiB/3201msec) 00:13:33.200 slat (usec): min=6, max=10894, avg=85.23, stdev=817.15 00:13:33.200 clat (usec): min=326, max=42366, avg=18093.05, stdev=20278.23 00:13:33.200 lat (usec): min=361, max=42396, avg=18116.86, stdev=20277.67 00:13:33.200 clat percentiles (usec): 00:13:33.200 | 1.00th=[ 351], 5.00th=[ 367], 10.00th=[ 379], 20.00th=[ 449], 00:13:33.200 | 30.00th=[ 465], 40.00th=[ 486], 50.00th=[ 529], 60.00th=[40633], 00:13:33.200 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:13:33.200 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:33.200 | 99.99th=[42206] 00:13:33.200 bw ( KiB/s): min= 96, max= 344, per=0.95%, avg=226.67, stdev=104.02, samples=6 00:13:33.200 iops : min= 24, max= 86, avg=56.67, stdev=26.01, samples=6 00:13:33.200 lat (usec) : 500=46.89%, 750=8.47%, 1000=1.13% 00:13:33.200 lat (msec) : 50=42.94% 00:13:33.200 cpu : usr=0.25%, sys=0.00%, ctx=178, majf=0, minf=1 00:13:33.200 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:33.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.200 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.200 issued rwts: total=177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.200 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:33.200 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3145162: Fri Apr 26 14:18:14 2024 00:13:33.200 read: IOPS=2432, BW=9728KiB/s (9962kB/s)(27.6MiB/2902msec) 00:13:33.200 slat (nsec): min=6151, max=57699, avg=12150.01, stdev=5662.40 00:13:33.200 clat (usec): min=232, max=42383, avg=395.94, stdev=1902.67 00:13:33.200 lat (usec): min=239, max=42398, avg=408.09, stdev=1903.07 00:13:33.200 clat percentiles (usec): 00:13:33.200 | 1.00th=[ 249], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 269], 00:13:33.200 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 310], 00:13:33.200 | 70.00th=[ 322], 80.00th=[ 330], 90.00th=[ 351], 95.00th=[ 400], 00:13:33.200 | 99.00th=[ 498], 99.50th=[ 523], 99.90th=[41681], 99.95th=[42206], 00:13:33.200 | 99.99th=[42206] 00:13:33.200 bw ( KiB/s): min= 4592, max=13224, per=38.48%, avg=9116.80, stdev=3084.92, samples=5 00:13:33.200 iops : min= 1148, max= 3306, avg=2279.20, stdev=771.23, samples=5 00:13:33.200 lat (usec) : 250=1.35%, 500=97.69%, 750=0.64%, 1000=0.03% 00:13:33.200 lat (msec) : 2=0.04%, 10=0.03%, 50=0.21% 00:13:33.200 cpu : usr=1.79%, sys=4.86%, ctx=7060, majf=0, minf=1 00:13:33.200 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:33.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.200 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.200 issued rwts: total=7059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.200 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:33.200 00:13:33.200 Run status group 0 (all jobs): 00:13:33.200 READ: bw=23.1MiB/s (24.3MB/s), 220KiB/s-9728KiB/s (225kB/s-9962kB/s), io=88.6MiB (92.9MB), run=2902-3831msec 00:13:33.200 00:13:33.200 Disk stats (read/write): 00:13:33.200 nvme0n1: ios=6872/0, merge=0/0, ticks=3145/0, in_queue=3145, util=95.94% 00:13:33.200 nvme0n2: ios=8554/0, merge=0/0, ticks=3364/0, in_queue=3364, util=96.11% 00:13:33.200 nvme0n3: ios=172/0, merge=0/0, ticks=3067/0, in_queue=3067, util=96.79% 00:13:33.200 nvme0n4: ios=6915/0, merge=0/0, ticks=2837/0, in_queue=2837, util=98.91% 
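The run above is the hotplug negative test: fio-wrapper starts a 10-second read job against the four connected namespaces (-i 4096 -d 1 -t read -r 10 becomes bs=4096, iodepth=1, rw=read, time_based runtime=10 in the generated job file), the backing raid and malloc bdevs are then deleted over rpc.py while I/O is in flight, and every job is expected to finish with err=121 (Remote I/O error). The reported numbers are self-consistent, e.g. job0: 6897 reads x 4096 B over 3512 ms works out to the quoted 7854 KiB/s (8043 kB/s). A minimal sketch of the same pattern outside the harness, with hypothetical paths and bdev names, might look like:

    #!/usr/bin/env bash
    # Minimal sketch of the hotplug expected-failure pattern exercised above.
    # The rpc.py path, device node, and bdev names here are assumptions, not
    # the harness's actual values.
    rpc=/path/to/spdk/scripts/rpc.py

    # Long-running read job against a connected NVMe-oF namespace, mirroring
    # the generated job file (bs=4096, iodepth=1, libaio, direct, 10 s).
    fio --name=hotplug --filename=/dev/nvme0n1 --rw=read --bs=4096 \
        --iodepth=1 --ioengine=libaio --direct=1 --time_based --runtime=10 &
    fio_pid=$!
    sleep 3   # let I/O get going before pulling the devices

    # Delete the backing bdevs out from under the subsystem, as the harness
    # does with bdev_raid_delete/bdev_malloc_delete.
    "$rpc" bdev_raid_delete raid0
    "$rpc" bdev_malloc_delete Malloc0

    # fio should now exit non-zero with Remote I/O errors (err=121).
    if wait "$fio_pid"; then
        echo "unexpected: fio succeeded"
    else
        echo "nvmf hotplug test: fio failed as expected"
    fi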
00:13:33.457 14:18:14 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:13:33.457 14:18:14 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:13:33.715 14:18:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:13:33.715 14:18:15 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:13:33.972 14:18:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:13:33.972 14:18:15 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:13:34.229 14:18:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:13:34.229 14:18:15 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:13:34.487 14:18:15 -- target/fio.sh@69 -- # fio_status=0
00:13:34.487 14:18:15 -- target/fio.sh@70 -- # wait 3144998
00:13:34.487 14:18:15 -- target/fio.sh@70 -- # fio_status=4
00:13:34.487 14:18:15 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:34.487 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:34.487 14:18:15 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:13:34.487 14:18:15 -- common/autotest_common.sh@1205 -- # local i=0
00:13:34.487 14:18:15 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL
00:13:34.487 14:18:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:34.487 14:18:16 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL
00:13:34.487 14:18:16 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:34.487 14:18:16 -- common/autotest_common.sh@1217 -- # return 0
00:13:34.487 14:18:16 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:13:34.487 14:18:16 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:13:34.487 nvmf hotplug test: fio failed as expected
00:13:34.487 14:18:16 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:34.745 14:18:16 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:13:34.745 14:18:16 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:13:34.745 14:18:16 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:13:34.745 14:18:16 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:13:34.745 14:18:16 -- target/fio.sh@91 -- # nvmftestfini
00:13:34.745 14:18:16 -- nvmf/common.sh@477 -- # nvmfcleanup
00:13:34.745 14:18:16 -- nvmf/common.sh@117 -- # sync
00:13:34.745 14:18:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:34.745 14:18:16 -- nvmf/common.sh@120 -- # set +e
00:13:34.745 14:18:16 -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:34.745 14:18:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:13:34.745 rmmod nvme_tcp
00:13:34.745 rmmod nvme_fabrics
00:13:34.745 rmmod nvme_keyring
00:13:35.003 14:18:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:35.003 14:18:16 -- nvmf/common.sh@124 -- # set -e
00:13:35.003 14:18:16 -- nvmf/common.sh@125 -- # return 0
00:13:35.003 14:18:16 -- nvmf/common.sh@478 -- # '[' -n 3142882 ']'
00:13:35.003 14:18:16 -- nvmf/common.sh@479 -- # killprocess 3142882
00:13:35.003 14:18:16 -- common/autotest_common.sh@936 -- # '[' -z 3142882 ']'
00:13:35.003 14:18:16 -- common/autotest_common.sh@940 -- # kill -0 3142882
00:13:35.003 14:18:16 -- common/autotest_common.sh@941 -- # uname
00:13:35.003 14:18:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:35.003 14:18:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3142882
00:13:35.003 14:18:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:35.003 14:18:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:35.003 14:18:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3142882'
00:13:35.003 killing process with pid 3142882
00:13:35.003 14:18:16 -- common/autotest_common.sh@955 -- # kill 3142882
00:13:35.003 14:18:16 -- common/autotest_common.sh@960 -- # wait 3142882
00:13:35.264 14:18:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:13:35.264 14:18:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:13:35.264 14:18:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:13:35.264 14:18:16 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:35.264 14:18:16 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:13:35.264 14:18:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:35.264 14:18:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:35.264 14:18:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:37.171 14:18:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:13:37.171
00:13:37.171 real 0m22.714s
00:13:37.171 user 1m19.789s
00:13:37.171 sys 0m6.548s
00:13:37.171 14:18:18 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:13:37.171 14:18:18 -- common/autotest_common.sh@10 -- # set +x
00:13:37.171 ************************************
00:13:37.171 END TEST nvmf_fio_target
00:13:37.171 ************************************
00:13:37.171 14:18:18 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:13:37.171 14:18:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:13:37.171 14:18:18 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:37.171 14:18:18 -- common/autotest_common.sh@10 -- # set +x
00:13:37.429 ************************************
00:13:37.429 START TEST nvmf_bdevio
00:13:37.429 ************************************
00:13:37.429 14:18:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:13:37.429 * Looking for test storage...
00:13:37.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:37.429 14:18:18 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:37.429 14:18:18 -- nvmf/common.sh@7 -- # uname -s 00:13:37.429 14:18:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.429 14:18:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.429 14:18:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.429 14:18:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.429 14:18:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:37.429 14:18:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:37.429 14:18:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.429 14:18:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:37.429 14:18:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.429 14:18:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:37.429 14:18:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:37.429 14:18:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:13:37.429 14:18:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.429 14:18:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:37.429 14:18:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:37.429 14:18:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:37.429 14:18:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:37.429 14:18:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.429 14:18:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.429 14:18:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.430 14:18:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.430 14:18:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.430 14:18:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.430 14:18:18 -- paths/export.sh@5 -- # export PATH 00:13:37.430 14:18:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.430 14:18:18 -- nvmf/common.sh@47 -- # : 0 00:13:37.430 14:18:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:37.430 14:18:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:37.430 14:18:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:37.430 14:18:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.430 14:18:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.430 14:18:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:37.430 14:18:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:37.430 14:18:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:37.430 14:18:18 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:37.430 14:18:18 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:37.430 14:18:18 -- target/bdevio.sh@14 -- # nvmftestinit 00:13:37.430 14:18:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:37.430 14:18:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:37.430 14:18:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:37.430 14:18:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:37.430 14:18:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:37.430 14:18:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.430 14:18:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:37.430 14:18:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.430 14:18:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:37.430 14:18:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:37.430 14:18:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:37.430 14:18:18 -- common/autotest_common.sh@10 -- # set +x 00:13:39.336 14:18:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:39.336 14:18:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:39.336 14:18:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:39.336 14:18:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:39.336 14:18:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:39.336 14:18:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:39.336 14:18:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:39.336 14:18:20 -- nvmf/common.sh@295 -- # net_devs=() 00:13:39.336 14:18:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:39.336 14:18:20 -- nvmf/common.sh@296 
-- # e810=() 00:13:39.336 14:18:20 -- nvmf/common.sh@296 -- # local -ga e810 00:13:39.336 14:18:20 -- nvmf/common.sh@297 -- # x722=() 00:13:39.336 14:18:20 -- nvmf/common.sh@297 -- # local -ga x722 00:13:39.336 14:18:20 -- nvmf/common.sh@298 -- # mlx=() 00:13:39.336 14:18:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:39.336 14:18:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:39.337 14:18:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:39.337 14:18:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:39.337 14:18:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:39.337 14:18:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:39.337 14:18:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:39.337 14:18:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:39.337 14:18:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:39.337 14:18:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:39.337 14:18:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:39.337 14:18:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:39.337 14:18:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:39.337 14:18:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:39.337 14:18:20 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:39.337 14:18:20 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:39.337 14:18:20 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:39.337 14:18:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:39.337 14:18:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:39.337 14:18:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:13:39.337 Found 0000:08:00.0 (0x8086 - 0x159b) 00:13:39.337 14:18:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:39.337 14:18:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:39.337 14:18:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.337 14:18:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.337 14:18:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:39.337 14:18:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:39.337 14:18:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:13:39.337 Found 0000:08:00.1 (0x8086 - 0x159b) 00:13:39.337 14:18:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:39.337 14:18:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:39.337 14:18:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.337 14:18:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.337 14:18:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:39.337 14:18:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:39.337 14:18:20 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:39.337 14:18:20 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:39.337 14:18:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:39.337 14:18:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.337 14:18:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:39.337 14:18:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.337 14:18:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:13:39.337 Found 
net devices under 0000:08:00.0: cvl_0_0 00:13:39.337 14:18:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.337 14:18:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:39.337 14:18:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.337 14:18:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:39.337 14:18:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.337 14:18:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:13:39.337 Found net devices under 0000:08:00.1: cvl_0_1 00:13:39.337 14:18:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.337 14:18:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:39.337 14:18:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:39.337 14:18:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:39.337 14:18:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:39.337 14:18:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:39.337 14:18:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:39.337 14:18:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:39.337 14:18:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:39.337 14:18:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:39.337 14:18:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:39.337 14:18:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:39.337 14:18:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:39.337 14:18:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:39.337 14:18:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:39.337 14:18:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:39.337 14:18:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:39.337 14:18:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:39.337 14:18:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:39.337 14:18:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:39.337 14:18:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:39.337 14:18:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:39.337 14:18:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:39.337 14:18:20 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:39.337 14:18:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:39.337 14:18:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:39.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:39.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:13:39.337 00:13:39.337 --- 10.0.0.2 ping statistics --- 00:13:39.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.337 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:13:39.337 14:18:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:39.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:39.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:13:39.337 00:13:39.337 --- 10.0.0.1 ping statistics --- 00:13:39.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.337 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:13:39.337 14:18:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:39.337 14:18:20 -- nvmf/common.sh@411 -- # return 0 00:13:39.337 14:18:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:39.337 14:18:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:39.337 14:18:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:39.337 14:18:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:39.337 14:18:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:39.337 14:18:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:39.337 14:18:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:39.337 14:18:20 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:39.337 14:18:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:39.337 14:18:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:39.337 14:18:20 -- common/autotest_common.sh@10 -- # set +x 00:13:39.337 14:18:20 -- nvmf/common.sh@470 -- # nvmfpid=3147112 00:13:39.337 14:18:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:39.337 14:18:20 -- nvmf/common.sh@471 -- # waitforlisten 3147112 00:13:39.337 14:18:20 -- common/autotest_common.sh@817 -- # '[' -z 3147112 ']' 00:13:39.337 14:18:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.337 14:18:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:39.337 14:18:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.337 14:18:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:39.337 14:18:20 -- common/autotest_common.sh@10 -- # set +x 00:13:39.337 [2024-04-26 14:18:20.619464] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:13:39.337 [2024-04-26 14:18:20.619561] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.337 EAL: No free 2048 kB hugepages reported on node 1 00:13:39.337 [2024-04-26 14:18:20.685713] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:39.337 [2024-04-26 14:18:20.805648] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.337 [2024-04-26 14:18:20.805711] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:39.337 [2024-04-26 14:18:20.805726] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.337 [2024-04-26 14:18:20.805739] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.337 [2024-04-26 14:18:20.805751] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:39.337 [2024-04-26 14:18:20.805844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:39.337 [2024-04-26 14:18:20.805898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:39.337 [2024-04-26 14:18:20.805946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:39.337 [2024-04-26 14:18:20.805949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:39.596 14:18:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:39.596 14:18:20 -- common/autotest_common.sh@850 -- # return 0 00:13:39.596 14:18:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:39.596 14:18:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:39.596 14:18:20 -- common/autotest_common.sh@10 -- # set +x 00:13:39.596 14:18:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.596 14:18:20 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:39.596 14:18:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.596 14:18:20 -- common/autotest_common.sh@10 -- # set +x 00:13:39.596 [2024-04-26 14:18:20.953359] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:39.596 14:18:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.596 14:18:20 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:39.596 14:18:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.596 14:18:20 -- common/autotest_common.sh@10 -- # set +x 00:13:39.596 Malloc0 00:13:39.596 14:18:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.596 14:18:20 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:39.596 14:18:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.596 14:18:20 -- common/autotest_common.sh@10 -- # set +x 00:13:39.596 14:18:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.596 14:18:20 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:39.596 14:18:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.596 14:18:20 -- common/autotest_common.sh@10 -- # set +x 00:13:39.596 14:18:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.596 14:18:20 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:39.596 14:18:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.596 14:18:20 -- common/autotest_common.sh@10 -- # set +x 00:13:39.596 [2024-04-26 14:18:21.002297] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.596 14:18:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.596 14:18:21 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:39.596 14:18:21 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:39.596 14:18:21 -- nvmf/common.sh@521 -- # config=() 00:13:39.596 14:18:21 -- nvmf/common.sh@521 -- # local subsystem config 00:13:39.596 14:18:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:39.596 14:18:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:39.596 { 00:13:39.596 "params": { 00:13:39.596 "name": "Nvme$subsystem", 00:13:39.596 "trtype": "$TEST_TRANSPORT", 00:13:39.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:39.596 "adrfam": "ipv4", 00:13:39.596 "trsvcid": 
"$NVMF_PORT", 00:13:39.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:39.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:39.596 "hdgst": ${hdgst:-false}, 00:13:39.596 "ddgst": ${ddgst:-false} 00:13:39.596 }, 00:13:39.596 "method": "bdev_nvme_attach_controller" 00:13:39.596 } 00:13:39.596 EOF 00:13:39.596 )") 00:13:39.596 14:18:21 -- nvmf/common.sh@543 -- # cat 00:13:39.596 14:18:21 -- nvmf/common.sh@545 -- # jq . 00:13:39.596 14:18:21 -- nvmf/common.sh@546 -- # IFS=, 00:13:39.596 14:18:21 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:39.596 "params": { 00:13:39.596 "name": "Nvme1", 00:13:39.596 "trtype": "tcp", 00:13:39.596 "traddr": "10.0.0.2", 00:13:39.596 "adrfam": "ipv4", 00:13:39.596 "trsvcid": "4420", 00:13:39.596 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:39.596 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:39.596 "hdgst": false, 00:13:39.596 "ddgst": false 00:13:39.596 }, 00:13:39.596 "method": "bdev_nvme_attach_controller" 00:13:39.596 }' 00:13:39.596 [2024-04-26 14:18:21.050714] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:13:39.596 [2024-04-26 14:18:21.050815] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3147231 ] 00:13:39.596 EAL: No free 2048 kB hugepages reported on node 1 00:13:39.596 [2024-04-26 14:18:21.110936] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:39.854 [2024-04-26 14:18:21.227017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.854 [2024-04-26 14:18:21.227097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.854 [2024-04-26 14:18:21.227131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.854 I/O targets: 00:13:39.854 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:39.854 00:13:39.854 00:13:39.854 CUnit - A unit testing framework for C - Version 2.1-3 00:13:39.854 http://cunit.sourceforge.net/ 00:13:39.854 00:13:39.854 00:13:39.854 Suite: bdevio tests on: Nvme1n1 00:13:40.112 Test: blockdev write read block ...passed 00:13:40.112 Test: blockdev write zeroes read block ...passed 00:13:40.112 Test: blockdev write zeroes read no split ...passed 00:13:40.112 Test: blockdev write zeroes read split ...passed 00:13:40.112 Test: blockdev write zeroes read split partial ...passed 00:13:40.112 Test: blockdev reset ...[2024-04-26 14:18:21.515267] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:40.112 [2024-04-26 14:18:21.515395] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x113da40 (9): Bad file descriptor 00:13:40.112 [2024-04-26 14:18:21.534005] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:40.112 passed 00:13:40.112 Test: blockdev write read 8 blocks ...passed 00:13:40.112 Test: blockdev write read size > 128k ...passed 00:13:40.112 Test: blockdev write read invalid size ...passed 00:13:40.112 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:40.112 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:40.112 Test: blockdev write read max offset ...passed 00:13:40.112 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:40.112 Test: blockdev writev readv 8 blocks ...passed 00:13:40.112 Test: blockdev writev readv 30 x 1block ...passed 00:13:40.371 Test: blockdev writev readv block ...passed 00:13:40.371 Test: blockdev writev readv size > 128k ...passed 00:13:40.371 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:40.371 Test: blockdev comparev and writev ...[2024-04-26 14:18:21.709504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:40.371 [2024-04-26 14:18:21.709555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:40.371 [2024-04-26 14:18:21.709584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:40.371 [2024-04-26 14:18:21.709602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:40.371 [2024-04-26 14:18:21.709957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:40.371 [2024-04-26 14:18:21.709983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:40.371 [2024-04-26 14:18:21.710008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:40.371 [2024-04-26 14:18:21.710026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:40.371 [2024-04-26 14:18:21.710350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:40.371 [2024-04-26 14:18:21.710377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:40.371 [2024-04-26 14:18:21.710401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:40.371 [2024-04-26 14:18:21.710419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:40.371 [2024-04-26 14:18:21.710773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:40.371 [2024-04-26 14:18:21.710798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:40.371 [2024-04-26 14:18:21.710822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:40.371 [2024-04-26 14:18:21.710839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:40.371 passed 00:13:40.371 Test: blockdev nvme passthru rw ...passed 00:13:40.371 Test: blockdev nvme passthru vendor specific ...[2024-04-26 14:18:21.794001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:40.371 [2024-04-26 14:18:21.794032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:40.371 [2024-04-26 14:18:21.794222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:40.371 [2024-04-26 14:18:21.794246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:40.371 [2024-04-26 14:18:21.794436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:40.371 [2024-04-26 14:18:21.794461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:40.371 [2024-04-26 14:18:21.794656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:40.371 [2024-04-26 14:18:21.794681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:40.371 passed 00:13:40.371 Test: blockdev nvme admin passthru ...passed 00:13:40.371 Test: blockdev copy ...passed 00:13:40.371 00:13:40.371 Run Summary: Type Total Ran Passed Failed Inactive 00:13:40.371 suites 1 1 n/a 0 0 00:13:40.371 tests 23 23 23 0 0 00:13:40.371 asserts 152 152 152 0 n/a 00:13:40.371 00:13:40.371 Elapsed time = 0.908 seconds 00:13:40.631 14:18:22 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.631 14:18:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:40.631 14:18:22 -- common/autotest_common.sh@10 -- # set +x 00:13:40.631 14:18:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:40.631 14:18:22 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:40.631 14:18:22 -- target/bdevio.sh@30 -- # nvmftestfini 00:13:40.631 14:18:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:40.631 14:18:22 -- nvmf/common.sh@117 -- # sync 00:13:40.631 14:18:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:40.631 14:18:22 -- nvmf/common.sh@120 -- # set +e 00:13:40.631 14:18:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:40.631 14:18:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:40.631 rmmod nvme_tcp 00:13:40.631 rmmod nvme_fabrics 00:13:40.631 rmmod nvme_keyring 00:13:40.631 14:18:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:40.631 14:18:22 -- nvmf/common.sh@124 -- # set -e 00:13:40.631 14:18:22 -- nvmf/common.sh@125 -- # return 0 00:13:40.631 14:18:22 -- nvmf/common.sh@478 -- # '[' -n 3147112 ']' 00:13:40.631 14:18:22 -- nvmf/common.sh@479 -- # killprocess 3147112 00:13:40.631 14:18:22 -- common/autotest_common.sh@936 -- # '[' -z 3147112 ']' 00:13:40.631 14:18:22 -- common/autotest_common.sh@940 -- # kill -0 3147112 00:13:40.631 14:18:22 -- common/autotest_common.sh@941 -- # uname 00:13:40.631 14:18:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:40.631 14:18:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3147112 00:13:40.631 14:18:22 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:13:40.631 14:18:22 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:13:40.631 14:18:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3147112' 00:13:40.631 killing process with pid 3147112 00:13:40.631 14:18:22 -- common/autotest_common.sh@955 -- # kill 3147112 00:13:40.631 14:18:22 -- common/autotest_common.sh@960 -- # wait 3147112 00:13:40.890 14:18:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:40.890 14:18:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:40.890 14:18:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:40.890 14:18:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:40.890 14:18:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:40.890 14:18:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.890 14:18:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:40.890 14:18:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.432 14:18:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:43.432 00:13:43.432 real 0m5.661s 00:13:43.432 user 0m8.572s 00:13:43.432 sys 0m1.699s 00:13:43.432 14:18:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:43.432 14:18:24 -- common/autotest_common.sh@10 -- # set +x 00:13:43.432 ************************************ 00:13:43.432 END TEST nvmf_bdevio 00:13:43.432 ************************************ 00:13:43.432 14:18:24 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:13:43.432 14:18:24 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:43.432 14:18:24 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:13:43.432 14:18:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:43.432 14:18:24 -- common/autotest_common.sh@10 -- # set +x 00:13:43.432 ************************************ 00:13:43.432 START TEST nvmf_bdevio_no_huge 00:13:43.432 ************************************ 00:13:43.432 14:18:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:43.432 * Looking for test storage... 
00:13:43.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:43.432 14:18:24 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.432 14:18:24 -- nvmf/common.sh@7 -- # uname -s 00:13:43.432 14:18:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.432 14:18:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.432 14:18:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.432 14:18:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.432 14:18:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.432 14:18:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.432 14:18:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.432 14:18:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.432 14:18:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.432 14:18:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.432 14:18:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:43.432 14:18:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:13:43.432 14:18:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.432 14:18:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.432 14:18:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.432 14:18:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.432 14:18:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:43.432 14:18:24 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.432 14:18:24 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.432 14:18:24 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.432 14:18:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.432 14:18:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.432 14:18:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.432 14:18:24 -- paths/export.sh@5 -- # export PATH 00:13:43.432 14:18:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.432 14:18:24 -- nvmf/common.sh@47 -- # : 0 00:13:43.432 14:18:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:43.432 14:18:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:43.432 14:18:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.432 14:18:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.432 14:18:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.432 14:18:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:43.432 14:18:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:43.432 14:18:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:43.432 14:18:24 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:43.432 14:18:24 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:43.432 14:18:24 -- target/bdevio.sh@14 -- # nvmftestinit 00:13:43.432 14:18:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:43.432 14:18:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.432 14:18:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:43.432 14:18:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:43.432 14:18:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:43.432 14:18:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.432 14:18:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.432 14:18:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.432 14:18:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:43.432 14:18:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:43.432 14:18:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:43.432 14:18:24 -- common/autotest_common.sh@10 -- # set +x 00:13:44.811 14:18:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:44.811 14:18:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:44.811 14:18:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:44.811 14:18:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:44.811 14:18:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:44.811 14:18:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:44.811 14:18:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:44.811 14:18:26 -- nvmf/common.sh@295 -- # net_devs=() 00:13:44.811 14:18:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:44.811 14:18:26 -- nvmf/common.sh@296 
-- # e810=() 00:13:44.811 14:18:26 -- nvmf/common.sh@296 -- # local -ga e810 00:13:44.811 14:18:26 -- nvmf/common.sh@297 -- # x722=() 00:13:44.811 14:18:26 -- nvmf/common.sh@297 -- # local -ga x722 00:13:44.811 14:18:26 -- nvmf/common.sh@298 -- # mlx=() 00:13:44.811 14:18:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:44.811 14:18:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:44.811 14:18:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:44.811 14:18:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:44.811 14:18:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:44.811 14:18:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:44.811 14:18:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:44.811 14:18:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:44.811 14:18:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:44.811 14:18:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:44.811 14:18:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:44.811 14:18:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:44.811 14:18:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:44.811 14:18:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:44.811 14:18:26 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:44.811 14:18:26 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:44.811 14:18:26 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:44.811 14:18:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:44.811 14:18:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:44.811 14:18:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:13:44.811 Found 0000:08:00.0 (0x8086 - 0x159b) 00:13:44.811 14:18:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:44.811 14:18:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:44.811 14:18:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.811 14:18:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.811 14:18:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:44.811 14:18:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:44.811 14:18:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:13:44.811 Found 0000:08:00.1 (0x8086 - 0x159b) 00:13:44.811 14:18:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:44.811 14:18:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:44.811 14:18:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.811 14:18:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.811 14:18:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:44.811 14:18:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:44.811 14:18:26 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:44.811 14:18:26 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:44.811 14:18:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:44.811 14:18:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.811 14:18:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:44.811 14:18:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.811 14:18:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:13:44.811 Found 
net devices under 0000:08:00.0: cvl_0_0 00:13:44.811 14:18:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.811 14:18:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:44.811 14:18:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.811 14:18:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:44.811 14:18:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.811 14:18:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:13:44.811 Found net devices under 0000:08:00.1: cvl_0_1 00:13:44.811 14:18:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.811 14:18:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:44.811 14:18:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:44.811 14:18:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:44.811 14:18:26 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:44.811 14:18:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:44.811 14:18:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:44.811 14:18:26 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:44.811 14:18:26 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:44.811 14:18:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:44.811 14:18:26 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:44.811 14:18:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:44.811 14:18:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:44.811 14:18:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:44.811 14:18:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.811 14:18:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:44.811 14:18:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:44.811 14:18:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:44.811 14:18:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:44.811 14:18:26 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:44.811 14:18:26 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:44.811 14:18:26 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:44.811 14:18:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:44.811 14:18:26 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:44.811 14:18:26 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:44.811 14:18:26 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:44.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:44.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:13:44.811 00:13:44.811 --- 10.0.0.2 ping statistics --- 00:13:44.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.811 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:13:44.811 14:18:26 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:44.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:44.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:13:44.811 00:13:44.811 --- 10.0.0.1 ping statistics --- 00:13:44.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.811 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:13:44.811 14:18:26 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.811 14:18:26 -- nvmf/common.sh@411 -- # return 0 00:13:44.811 14:18:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:44.811 14:18:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.811 14:18:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:44.811 14:18:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:44.811 14:18:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.811 14:18:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:44.811 14:18:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:44.811 14:18:26 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:44.811 14:18:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:44.811 14:18:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:44.811 14:18:26 -- common/autotest_common.sh@10 -- # set +x 00:13:44.811 14:18:26 -- nvmf/common.sh@470 -- # nvmfpid=3148830 00:13:44.811 14:18:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:44.811 14:18:26 -- nvmf/common.sh@471 -- # waitforlisten 3148830 00:13:44.811 14:18:26 -- common/autotest_common.sh@817 -- # '[' -z 3148830 ']' 00:13:44.811 14:18:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.811 14:18:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:44.811 14:18:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.811 14:18:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:44.811 14:18:26 -- common/autotest_common.sh@10 -- # set +x 00:13:45.070 [2024-04-26 14:18:26.408624] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:13:45.070 [2024-04-26 14:18:26.408724] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:45.070 [2024-04-26 14:18:26.480992] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:45.070 [2024-04-26 14:18:26.599274] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.070 [2024-04-26 14:18:26.599331] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.070 [2024-04-26 14:18:26.599348] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.070 [2024-04-26 14:18:26.599361] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.070 [2024-04-26 14:18:26.599373] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
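[editor note] The no-huge pass repeats the same namespace wiring; only the process launches change. A sketch of the two invocations, with the memory flags that give the test its name (paths shortened to the spdk tree):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
  # --no-huge: back DPDK memory with anonymous pages instead of hugetlbfs;
  #            the EAL parameter line above accordingly shows --no-huge --iova-mode=va
  # -s 1024:   cap the pre-allocated memory pool at 1024 MB
  ./test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024
  # the initiator-side bdevio app (traced below) is launched with the same two flags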
00:13:45.070 [2024-04-26 14:18:26.599465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:45.070 [2024-04-26 14:18:26.599569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:45.070 [2024-04-26 14:18:26.599572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.070 [2024-04-26 14:18:26.599516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:45.329 14:18:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:45.329 14:18:26 -- common/autotest_common.sh@850 -- # return 0 00:13:45.329 14:18:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:45.329 14:18:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:45.329 14:18:26 -- common/autotest_common.sh@10 -- # set +x 00:13:45.329 14:18:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.329 14:18:26 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:45.329 14:18:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:45.329 14:18:26 -- common/autotest_common.sh@10 -- # set +x 00:13:45.329 [2024-04-26 14:18:26.730427] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.329 14:18:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.329 14:18:26 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:45.329 14:18:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:45.329 14:18:26 -- common/autotest_common.sh@10 -- # set +x 00:13:45.329 Malloc0 00:13:45.329 14:18:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.329 14:18:26 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:45.329 14:18:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:45.329 14:18:26 -- common/autotest_common.sh@10 -- # set +x 00:13:45.329 14:18:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.329 14:18:26 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:45.329 14:18:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:45.329 14:18:26 -- common/autotest_common.sh@10 -- # set +x 00:13:45.329 14:18:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.329 14:18:26 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.329 14:18:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:45.329 14:18:26 -- common/autotest_common.sh@10 -- # set +x 00:13:45.329 [2024-04-26 14:18:26.768832] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.329 14:18:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:45.329 14:18:26 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:45.329 14:18:26 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:45.329 14:18:26 -- nvmf/common.sh@521 -- # config=() 00:13:45.330 14:18:26 -- nvmf/common.sh@521 -- # local subsystem config 00:13:45.330 14:18:26 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:45.330 14:18:26 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:45.330 { 00:13:45.330 "params": { 00:13:45.330 "name": "Nvme$subsystem", 00:13:45.330 "trtype": "$TEST_TRANSPORT", 00:13:45.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:45.330 "adrfam": "ipv4", 00:13:45.330 
"trsvcid": "$NVMF_PORT", 00:13:45.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:45.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:45.330 "hdgst": ${hdgst:-false}, 00:13:45.330 "ddgst": ${ddgst:-false} 00:13:45.330 }, 00:13:45.330 "method": "bdev_nvme_attach_controller" 00:13:45.330 } 00:13:45.330 EOF 00:13:45.330 )") 00:13:45.330 14:18:26 -- nvmf/common.sh@543 -- # cat 00:13:45.330 14:18:26 -- nvmf/common.sh@545 -- # jq . 00:13:45.330 14:18:26 -- nvmf/common.sh@546 -- # IFS=, 00:13:45.330 14:18:26 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:45.330 "params": { 00:13:45.330 "name": "Nvme1", 00:13:45.330 "trtype": "tcp", 00:13:45.330 "traddr": "10.0.0.2", 00:13:45.330 "adrfam": "ipv4", 00:13:45.330 "trsvcid": "4420", 00:13:45.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:45.330 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:45.330 "hdgst": false, 00:13:45.330 "ddgst": false 00:13:45.330 }, 00:13:45.330 "method": "bdev_nvme_attach_controller" 00:13:45.330 }' 00:13:45.330 [2024-04-26 14:18:26.815478] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:13:45.330 [2024-04-26 14:18:26.815578] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3148863 ] 00:13:45.330 [2024-04-26 14:18:26.880181] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:45.588 [2024-04-26 14:18:27.002745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.588 [2024-04-26 14:18:27.002826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.588 [2024-04-26 14:18:27.002860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.846 I/O targets: 00:13:45.846 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:45.846 00:13:45.846 00:13:45.846 CUnit - A unit testing framework for C - Version 2.1-3 00:13:45.846 http://cunit.sourceforge.net/ 00:13:45.846 00:13:45.846 00:13:45.846 Suite: bdevio tests on: Nvme1n1 00:13:45.846 Test: blockdev write read block ...passed 00:13:45.846 Test: blockdev write zeroes read block ...passed 00:13:45.846 Test: blockdev write zeroes read no split ...passed 00:13:45.846 Test: blockdev write zeroes read split ...passed 00:13:45.846 Test: blockdev write zeroes read split partial ...passed 00:13:45.846 Test: blockdev reset ...[2024-04-26 14:18:27.330596] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:45.846 [2024-04-26 14:18:27.330731] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d653e0 (9): Bad file descriptor 00:13:45.846 [2024-04-26 14:18:27.342963] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:45.847 passed 00:13:45.847 Test: blockdev write read 8 blocks ...passed 00:13:45.847 Test: blockdev write read size > 128k ...passed 00:13:45.847 Test: blockdev write read invalid size ...passed 00:13:45.847 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:45.847 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:45.847 Test: blockdev write read max offset ...passed 00:13:46.104 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:46.104 Test: blockdev writev readv 8 blocks ...passed 00:13:46.104 Test: blockdev writev readv 30 x 1block ...passed 00:13:46.104 Test: blockdev writev readv block ...passed 00:13:46.104 Test: blockdev writev readv size > 128k ...passed 00:13:46.104 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:46.104 Test: blockdev comparev and writev ...[2024-04-26 14:18:27.518294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.104 [2024-04-26 14:18:27.518335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:46.105 [2024-04-26 14:18:27.518362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.105 [2024-04-26 14:18:27.518380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:46.105 [2024-04-26 14:18:27.518767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.105 [2024-04-26 14:18:27.518793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:46.105 [2024-04-26 14:18:27.518817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.105 [2024-04-26 14:18:27.518834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:46.105 [2024-04-26 14:18:27.519182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.105 [2024-04-26 14:18:27.519209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:46.105 [2024-04-26 14:18:27.519233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.105 [2024-04-26 14:18:27.519249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:46.105 [2024-04-26 14:18:27.519586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.105 [2024-04-26 14:18:27.519610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:46.105 [2024-04-26 14:18:27.519641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:46.105 [2024-04-26 14:18:27.519660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:46.105 passed 00:13:46.105 Test: blockdev nvme passthru rw ...passed 00:13:46.105 Test: blockdev nvme passthru vendor specific ...[2024-04-26 14:18:27.602928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:46.105 [2024-04-26 14:18:27.602957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:46.105 [2024-04-26 14:18:27.603132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:46.105 [2024-04-26 14:18:27.603156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:46.105 [2024-04-26 14:18:27.603324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:46.105 [2024-04-26 14:18:27.603347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:46.105 [2024-04-26 14:18:27.603516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:46.105 [2024-04-26 14:18:27.603540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:46.105 passed 00:13:46.105 Test: blockdev nvme admin passthru ...passed 00:13:46.105 Test: blockdev copy ...passed 00:13:46.105 00:13:46.105 Run Summary: Type Total Ran Passed Failed Inactive 00:13:46.105 suites 1 1 n/a 0 0 00:13:46.105 tests 23 23 23 0 0 00:13:46.105 asserts 152 152 152 0 n/a 00:13:46.105 00:13:46.105 Elapsed time = 0.995 seconds 00:13:46.672 14:18:28 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:46.672 14:18:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.672 14:18:28 -- common/autotest_common.sh@10 -- # set +x 00:13:46.672 14:18:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.672 14:18:28 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:46.672 14:18:28 -- target/bdevio.sh@30 -- # nvmftestfini 00:13:46.672 14:18:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:46.672 14:18:28 -- nvmf/common.sh@117 -- # sync 00:13:46.672 14:18:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:46.672 14:18:28 -- nvmf/common.sh@120 -- # set +e 00:13:46.672 14:18:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:46.672 14:18:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:46.672 rmmod nvme_tcp 00:13:46.672 rmmod nvme_fabrics 00:13:46.672 rmmod nvme_keyring 00:13:46.672 14:18:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:46.672 14:18:28 -- nvmf/common.sh@124 -- # set -e 00:13:46.672 14:18:28 -- nvmf/common.sh@125 -- # return 0 00:13:46.672 14:18:28 -- nvmf/common.sh@478 -- # '[' -n 3148830 ']' 00:13:46.672 14:18:28 -- nvmf/common.sh@479 -- # killprocess 3148830 00:13:46.672 14:18:28 -- common/autotest_common.sh@936 -- # '[' -z 3148830 ']' 00:13:46.672 14:18:28 -- common/autotest_common.sh@940 -- # kill -0 3148830 00:13:46.672 14:18:28 -- common/autotest_common.sh@941 -- # uname 00:13:46.672 14:18:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:46.672 14:18:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3148830 00:13:46.672 14:18:28 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:13:46.672 14:18:28 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:13:46.672 14:18:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3148830' 00:13:46.672 killing process with pid 3148830 00:13:46.672 14:18:28 -- common/autotest_common.sh@955 -- # kill 3148830 00:13:46.672 14:18:28 -- common/autotest_common.sh@960 -- # wait 3148830 00:13:47.240 14:18:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:47.240 14:18:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:47.240 14:18:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:47.240 14:18:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:47.240 14:18:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:47.240 14:18:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.240 14:18:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.240 14:18:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.146 14:18:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:49.146 00:13:49.146 real 0m6.005s 00:13:49.146 user 0m9.738s 00:13:49.146 sys 0m2.191s 00:13:49.146 14:18:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:49.146 14:18:30 -- common/autotest_common.sh@10 -- # set +x 00:13:49.146 ************************************ 00:13:49.146 END TEST nvmf_bdevio_no_huge 00:13:49.146 ************************************ 00:13:49.146 14:18:30 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:49.146 14:18:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:49.146 14:18:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:49.146 14:18:30 -- common/autotest_common.sh@10 -- # set +x 00:13:49.146 ************************************ 00:13:49.146 START TEST nvmf_tls 00:13:49.146 ************************************ 00:13:49.146 14:18:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:49.404 * Looking for test storage... 
00:13:49.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.404 14:18:30 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.404 14:18:30 -- nvmf/common.sh@7 -- # uname -s 00:13:49.404 14:18:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.404 14:18:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.404 14:18:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.404 14:18:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.404 14:18:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.404 14:18:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.404 14:18:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.404 14:18:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.404 14:18:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.404 14:18:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.404 14:18:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:49.404 14:18:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:13:49.404 14:18:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.404 14:18:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.404 14:18:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.404 14:18:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.404 14:18:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.404 14:18:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.404 14:18:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.404 14:18:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.404 14:18:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.404 14:18:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.404 14:18:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.404 14:18:30 -- paths/export.sh@5 -- # export PATH 00:13:49.405 14:18:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.405 14:18:30 -- nvmf/common.sh@47 -- # : 0 00:13:49.405 14:18:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:49.405 14:18:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:49.405 14:18:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.405 14:18:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.405 14:18:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.405 14:18:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:49.405 14:18:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:49.405 14:18:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:49.405 14:18:30 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:49.405 14:18:30 -- target/tls.sh@62 -- # nvmftestinit 00:13:49.405 14:18:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:49.405 14:18:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.405 14:18:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:49.405 14:18:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:49.405 14:18:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:49.405 14:18:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.405 14:18:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:49.405 14:18:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.405 14:18:30 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:49.405 14:18:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:49.405 14:18:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:49.405 14:18:30 -- common/autotest_common.sh@10 -- # set +x 00:13:50.782 14:18:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:50.782 14:18:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:50.782 14:18:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:50.782 14:18:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:50.782 14:18:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:50.782 14:18:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:50.782 14:18:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:50.782 14:18:32 -- nvmf/common.sh@295 -- # net_devs=() 00:13:50.782 14:18:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:50.782 14:18:32 -- nvmf/common.sh@296 -- # e810=() 00:13:50.782 
14:18:32 -- nvmf/common.sh@296 -- # local -ga e810 00:13:50.782 14:18:32 -- nvmf/common.sh@297 -- # x722=() 00:13:50.782 14:18:32 -- nvmf/common.sh@297 -- # local -ga x722 00:13:50.782 14:18:32 -- nvmf/common.sh@298 -- # mlx=() 00:13:50.782 14:18:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:50.782 14:18:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:50.782 14:18:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:50.782 14:18:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:50.782 14:18:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:50.782 14:18:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:50.782 14:18:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:50.782 14:18:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:50.782 14:18:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:50.782 14:18:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:50.782 14:18:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:50.782 14:18:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:50.783 14:18:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:50.783 14:18:32 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:50.783 14:18:32 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:50.783 14:18:32 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:50.783 14:18:32 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:50.783 14:18:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:50.783 14:18:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.783 14:18:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:13:50.783 Found 0000:08:00.0 (0x8086 - 0x159b) 00:13:50.783 14:18:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:50.783 14:18:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:50.783 14:18:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.783 14:18:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.783 14:18:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:50.783 14:18:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:50.783 14:18:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:13:50.783 Found 0000:08:00.1 (0x8086 - 0x159b) 00:13:50.783 14:18:32 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:50.783 14:18:32 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:50.783 14:18:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.783 14:18:32 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.783 14:18:32 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:50.783 14:18:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:50.783 14:18:32 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:50.783 14:18:32 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:50.783 14:18:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.783 14:18:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.783 14:18:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:50.783 14:18:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.783 14:18:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:13:50.783 Found net devices under 
0000:08:00.0: cvl_0_0 00:13:50.783 14:18:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.783 14:18:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:50.783 14:18:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.783 14:18:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:50.783 14:18:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.783 14:18:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:13:50.783 Found net devices under 0000:08:00.1: cvl_0_1 00:13:50.783 14:18:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.783 14:18:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:50.783 14:18:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:50.783 14:18:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:50.783 14:18:32 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:50.783 14:18:32 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:50.783 14:18:32 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.783 14:18:32 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.783 14:18:32 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:50.783 14:18:32 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:50.783 14:18:32 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:50.783 14:18:32 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:50.783 14:18:32 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:50.783 14:18:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:50.783 14:18:32 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.783 14:18:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:50.783 14:18:32 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:50.783 14:18:32 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:51.041 14:18:32 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:51.041 14:18:32 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:51.041 14:18:32 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:51.041 14:18:32 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:51.041 14:18:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:51.041 14:18:32 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:51.041 14:18:32 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:51.041 14:18:32 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:51.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:51.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:13:51.041 00:13:51.041 --- 10.0.0.2 ping statistics --- 00:13:51.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.041 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:13:51.041 14:18:32 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:51.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:51.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:13:51.041 00:13:51.041 --- 10.0.0.1 ping statistics --- 00:13:51.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.041 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:13:51.041 14:18:32 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:51.041 14:18:32 -- nvmf/common.sh@411 -- # return 0 00:13:51.041 14:18:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:51.041 14:18:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:51.041 14:18:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:51.041 14:18:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:51.041 14:18:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:51.041 14:18:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:51.041 14:18:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:51.041 14:18:32 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:51.041 14:18:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:51.041 14:18:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:51.041 14:18:32 -- common/autotest_common.sh@10 -- # set +x 00:13:51.041 14:18:32 -- nvmf/common.sh@470 -- # nvmfpid=3150471 00:13:51.041 14:18:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:51.041 14:18:32 -- nvmf/common.sh@471 -- # waitforlisten 3150471 00:13:51.041 14:18:32 -- common/autotest_common.sh@817 -- # '[' -z 3150471 ']' 00:13:51.041 14:18:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.041 14:18:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:51.041 14:18:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.041 14:18:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:51.041 14:18:32 -- common/autotest_common.sh@10 -- # set +x 00:13:51.041 [2024-04-26 14:18:32.537320] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:13:51.041 [2024-04-26 14:18:32.537417] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.041 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.300 [2024-04-26 14:18:32.621378] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.300 [2024-04-26 14:18:32.772493] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.300 [2024-04-26 14:18:32.772570] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.300 [2024-04-26 14:18:32.772600] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:51.300 [2024-04-26 14:18:32.772625] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:51.300 [2024-04-26 14:18:32.772659] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
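[editor's note] The nvmf_tcp_init sequence above builds the test topology: one port of the E810 pair (cvl_0_0, the target side, 10.0.0.2) is moved into a private network namespace while its sibling (cvl_0_1, the initiator side, 10.0.0.1) stays in the default namespace, so NVMe/TCP traffic crosses the physical link rather than loopback. A minimal sketch of the same setup, using the names from the trace; any connected NIC pair would do:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator port stays outside
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                             # default ns -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1         # namespace -> default ns

The two pings above (0.304 ms and 0.214 ms round trips) are exactly this verification; every later nvmf_tgt invocation is wrapped in ip netns exec so it listens on the namespaced 10.0.0.2.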
00:13:51.300 [2024-04-26 14:18:32.772727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.558 14:18:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:51.558 14:18:32 -- common/autotest_common.sh@850 -- # return 0 00:13:51.558 14:18:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:51.558 14:18:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:51.558 14:18:32 -- common/autotest_common.sh@10 -- # set +x 00:13:51.558 14:18:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.558 14:18:32 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:13:51.558 14:18:32 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:51.821 true 00:13:51.821 14:18:33 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:51.821 14:18:33 -- target/tls.sh@73 -- # jq -r .tls_version 00:13:52.082 14:18:33 -- target/tls.sh@73 -- # version=0 00:13:52.082 14:18:33 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:13:52.082 14:18:33 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:52.340 14:18:33 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:52.340 14:18:33 -- target/tls.sh@81 -- # jq -r .tls_version 00:13:52.598 14:18:34 -- target/tls.sh@81 -- # version=13 00:13:52.598 14:18:34 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:13:52.598 14:18:34 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:52.855 14:18:34 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:52.855 14:18:34 -- target/tls.sh@89 -- # jq -r .tls_version 00:13:53.113 14:18:34 -- target/tls.sh@89 -- # version=7 00:13:53.113 14:18:34 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:13:53.113 14:18:34 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:53.113 14:18:34 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:13:53.397 14:18:34 -- target/tls.sh@96 -- # ktls=false 00:13:53.397 14:18:34 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:13:53.397 14:18:34 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:53.655 14:18:35 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:53.655 14:18:35 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:13:53.914 14:18:35 -- target/tls.sh@104 -- # ktls=true 00:13:53.914 14:18:35 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:13:53.914 14:18:35 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:54.171 14:18:35 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:13:54.171 14:18:35 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:54.429 14:18:35 -- target/tls.sh@112 -- # ktls=false 00:13:54.429 14:18:35 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:13:54.429 14:18:35 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 
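[editor's note] The sock_set_default_impl/sock_impl_set_options traffic traced above is a get/set round trip on the ssl socket implementation: each write is read back with sock_impl_get_options piped through jq and compared. Condensed (rpc abbreviates the full scripts/rpc.py path seen in the trace):

    rpc=scripts/rpc.py
    $rpc sock_set_default_impl -i ssl
    $rpc sock_impl_set_options -i ssl --tls-version 13
    [[ $($rpc sock_impl_get_options -i ssl | jq -r .tls_version) == 13 ]]
    $rpc sock_impl_set_options -i ssl --tls-version 7
    [[ $($rpc sock_impl_get_options -i ssl | jq -r .tls_version) == 7 ]]
    $rpc sock_impl_set_options -i ssl --enable-ktls
    [[ $($rpc sock_impl_get_options -i ssl | jq -r .enable_ktls) == true ]]
    $rpc sock_impl_set_options -i ssl --disable-ktls
    [[ $($rpc sock_impl_get_options -i ssl | jq -r .enable_ktls) == false ]]

The test only asserts that each value reads back as written; the format_interchange_psk call that follows moves on to generating the TLS keys themselves.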
00:13:54.430 14:18:35 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:54.430 14:18:35 -- nvmf/common.sh@691 -- # local prefix key digest 00:13:54.430 14:18:35 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:13:54.430 14:18:35 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:13:54.430 14:18:35 -- nvmf/common.sh@693 -- # digest=1 00:13:54.430 14:18:35 -- nvmf/common.sh@694 -- # python - 00:13:54.430 14:18:35 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:54.430 14:18:35 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:54.430 14:18:35 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:54.430 14:18:35 -- nvmf/common.sh@691 -- # local prefix key digest 00:13:54.430 14:18:35 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:13:54.430 14:18:35 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:13:54.430 14:18:35 -- nvmf/common.sh@693 -- # digest=1 00:13:54.430 14:18:35 -- nvmf/common.sh@694 -- # python - 00:13:54.430 14:18:35 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:54.430 14:18:35 -- target/tls.sh@121 -- # mktemp 00:13:54.430 14:18:35 -- target/tls.sh@121 -- # key_path=/tmp/tmp.2fkpFBchYI 00:13:54.430 14:18:35 -- target/tls.sh@122 -- # mktemp 00:13:54.430 14:18:35 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.lKB2XfFYFL 00:13:54.430 14:18:35 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:54.430 14:18:35 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:54.430 14:18:35 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.2fkpFBchYI 00:13:54.430 14:18:35 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.lKB2XfFYFL 00:13:54.430 14:18:35 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:54.689 14:18:36 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:13:55.256 14:18:36 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.2fkpFBchYI 00:13:55.256 14:18:36 -- target/tls.sh@49 -- # local key=/tmp/tmp.2fkpFBchYI 00:13:55.256 14:18:36 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:55.256 [2024-04-26 14:18:36.762710] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:55.256 14:18:36 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:55.514 14:18:37 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:55.772 [2024-04-26 14:18:37.243990] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:55.772 [2024-04-26 14:18:37.244217] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.772 14:18:37 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:56.030 malloc0 00:13:56.030 14:18:37 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:13:56.288 14:18:37 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2fkpFBchYI
00:13:56.545 [2024-04-26 14:18:37.972551] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:13:56.545 14:18:37 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.2fkpFBchYI
00:13:56.545 EAL: No free 2048 kB hugepages reported on node 1
00:14:08.747 Initializing NVMe Controllers
00:14:08.747 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:08.747 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:14:08.747 Initialization complete. Launching workers.
00:14:08.747 ========================================================
00:14:08.747 Latency(us)
00:14:08.747 Device Information : IOPS MiB/s Average min max
00:14:08.747 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7400.30 28.91 8651.23 1311.73 9793.39
00:14:08.747 ========================================================
00:14:08.747 Total : 7400.30 28.91 8651.23 1311.73 9793.39
00:14:08.747
00:14:08.747 14:18:48 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2fkpFBchYI
00:14:08.747 14:18:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:14:08.747 14:18:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:14:08.747 14:18:48 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:14:08.747 14:18:48 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2fkpFBchYI'
00:14:08.747 14:18:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:14:08.747 14:18:48 -- target/tls.sh@28 -- # bdevperf_pid=3151928
00:14:08.747 14:18:48 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:14:08.747 14:18:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:14:08.747 14:18:48 -- target/tls.sh@31 -- # waitforlisten 3151928 /var/tmp/bdevperf.sock
00:14:08.747 14:18:48 -- common/autotest_common.sh@817 -- # '[' -z 3151928 ']'
00:14:08.747 14:18:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:14:08.747 14:18:48 -- common/autotest_common.sh@822 -- # local max_retries=100
00:14:08.747 14:18:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:14:08.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 14:18:48 -- common/autotest_common.sh@826 -- # xtrace_disable
00:14:08.747 14:18:48 -- common/autotest_common.sh@10 -- # set +x
00:14:08.747 [2024-04-26 14:18:48.150509] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
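[editor's note] The NVMeTLSkey-1 strings configured above come from format_interchange_psk. Judging from the traced python helper and the visible base64 payloads, it appends a little-endian CRC32 of the key characters to the key and base64-encodes the pair, with the second argument becoming the two-digit hash field (01 and 02, conventionally SHA-256 and SHA-384 for retained keys; this run exercises both). A reconstruction under those assumptions, not the verbatim helper:

    format_interchange_psk() {  # usage: format_interchange_psk <key-string> <digest>
        local k=$1 d=$2
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$k" "$d"
    }
    format_interchange_psk 00112233445566778899aabbccddeeff 1
    # expected: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

Note the interchange file is written with chmod 0600; the suite later relaxes one to 0666 on purpose to check that a world-readable PSK is refused.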
00:14:08.747 [2024-04-26 14:18:48.150606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3151928 ]
00:14:08.747 EAL: No free 2048 kB hugepages reported on node 1
00:14:08.747 [2024-04-26 14:18:48.210766] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:08.747 [2024-04-26 14:18:48.329232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:14:08.747 14:18:48 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:14:08.747 14:18:48 -- common/autotest_common.sh@850 -- # return 0
00:14:08.747 14:18:48 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2fkpFBchYI
00:14:08.747 [2024-04-26 14:18:48.703020] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:14:08.747 [2024-04-26 14:18:48.703158] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:14:08.747 TLSTESTn1
00:14:08.747 14:18:48 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:14:08.761 Running I/O for 10 seconds...
00:14:18.755
00:14:18.755 Latency(us)
00:14:18.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:18.755 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:14:18.755 Verification LBA range: start 0x0 length 0x2000
00:14:18.755 TLSTESTn1 : 10.02 3292.76 12.86 0.00 0.00 38801.23 8107.05 30874.74
00:14:18.755 ===================================================================================================================
00:14:18.755 Total : 3292.76 12.86 0.00 0.00 38801.23 8107.05 30874.74
00:14:18.755 0
00:14:18.755 14:18:58 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:14:18.755 14:18:58 -- target/tls.sh@45 -- # killprocess 3151928
00:14:18.755 14:18:58 -- common/autotest_common.sh@936 -- # '[' -z 3151928 ']'
00:14:18.755 14:18:58 -- common/autotest_common.sh@940 -- # kill -0 3151928
00:14:18.755 14:18:58 -- common/autotest_common.sh@941 -- # uname
00:14:18.755 14:18:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:18.755 14:18:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3151928
00:14:18.755 14:18:58 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:14:18.755 14:18:58 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:14:18.755 14:18:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3151928'
00:14:18.755 killing process with pid 3151928
00:14:18.755 14:18:58 -- common/autotest_common.sh@955 -- # kill 3151928
00:14:18.755 Received shutdown signal, test time was about 10.000000 seconds
00:14:18.755
00:14:18.755 Latency(us)
00:14:18.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:18.755 ===================================================================================================================
00:14:18.755 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:14:18.755 [2024-04-26 14:18:58.996870] app.c: 937:log_deprecation_hits: *WARNING*:
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:18.755 14:18:58 -- common/autotest_common.sh@960 -- # wait 3151928 00:14:18.755 14:18:59 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lKB2XfFYFL 00:14:18.755 14:18:59 -- common/autotest_common.sh@638 -- # local es=0 00:14:18.755 14:18:59 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lKB2XfFYFL 00:14:18.755 14:18:59 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:14:18.755 14:18:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:18.755 14:18:59 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:18.755 14:18:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:18.755 14:18:59 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lKB2XfFYFL 00:14:18.755 14:18:59 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:18.755 14:18:59 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:18.755 14:18:59 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:18.755 14:18:59 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lKB2XfFYFL' 00:14:18.755 14:18:59 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:18.755 14:18:59 -- target/tls.sh@28 -- # bdevperf_pid=3152932 00:14:18.755 14:18:59 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:18.755 14:18:59 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:18.755 14:18:59 -- target/tls.sh@31 -- # waitforlisten 3152932 /var/tmp/bdevperf.sock 00:14:18.755 14:18:59 -- common/autotest_common.sh@817 -- # '[' -z 3152932 ']' 00:14:18.755 14:18:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:18.755 14:18:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:18.755 14:18:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:18.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:18.755 14:18:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:18.755 14:18:59 -- common/autotest_common.sh@10 -- # set +x 00:14:18.755 [2024-04-26 14:18:59.265920] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
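[editor's note] Every run_bdevperf in the rest of this log, passing or deliberately failing, uses the same recipe: launch bdevperf in wait mode on a private RPC socket, attach a TLS-wrapped NVMe-oF controller through that socket, then trigger the registered job. Schematically, with relative paths standing in for the absolute workspace paths of the trace:

    sock=/var/tmp/bdevperf.sock
    build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
    # poll until the RPC socket answers; the suite's waitforlisten does this part
    scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2fkpFBchYI
    examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests

With a matching PSK the attach yields the TLSTESTn1 bdev and perform_tests reports IOPS; in the negative variants that start here, bdev_nvme_attach_controller is the step that fails, as the JSON-RPC errors below show.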
00:14:18.755 [2024-04-26 14:18:59.266025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3152932 ] 00:14:18.755 EAL: No free 2048 kB hugepages reported on node 1 00:14:18.755 [2024-04-26 14:18:59.326276] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.755 [2024-04-26 14:18:59.444759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:18.755 14:18:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:18.755 14:18:59 -- common/autotest_common.sh@850 -- # return 0 00:14:18.755 14:18:59 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lKB2XfFYFL 00:14:18.755 [2024-04-26 14:18:59.817639] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:18.755 [2024-04-26 14:18:59.817794] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:18.755 [2024-04-26 14:18:59.824188] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:18.755 [2024-04-26 14:18:59.825034] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c3980 (107): Transport endpoint is not connected 00:14:18.755 [2024-04-26 14:18:59.826024] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c3980 (9): Bad file descriptor 00:14:18.755 [2024-04-26 14:18:59.827024] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:18.755 [2024-04-26 14:18:59.827053] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:18.755 [2024-04-26 14:18:59.827067] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:18.756 request:
00:14:18.756 {
00:14:18.756 "name": "TLSTEST",
00:14:18.756 "trtype": "tcp",
00:14:18.756 "traddr": "10.0.0.2",
00:14:18.756 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:14:18.756 "adrfam": "ipv4",
00:14:18.756 "trsvcid": "4420",
00:14:18.756 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:14:18.756 "psk": "/tmp/tmp.lKB2XfFYFL",
00:14:18.756 "method": "bdev_nvme_attach_controller",
00:14:18.756 "req_id": 1
00:14:18.756 }
00:14:18.756 Got JSON-RPC error response
00:14:18.756 response:
00:14:18.756 {
00:14:18.756 "code": -32602,
00:14:18.756 "message": "Invalid parameters"
00:14:18.756 }
00:14:18.756 14:18:59 -- target/tls.sh@36 -- # killprocess 3152932
00:14:18.756 14:18:59 -- common/autotest_common.sh@936 -- # '[' -z 3152932 ']'
00:14:18.756 14:18:59 -- common/autotest_common.sh@940 -- # kill -0 3152932
00:14:18.756 14:18:59 -- common/autotest_common.sh@941 -- # uname
00:14:18.756 14:18:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:18.756 14:18:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3152932
00:14:18.756 14:18:59 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:14:18.756 14:18:59 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:14:18.756 14:18:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3152932'
00:14:18.756 killing process with pid 3152932
00:14:18.756 14:18:59 -- common/autotest_common.sh@955 -- # kill 3152932
00:14:18.756 Received shutdown signal, test time was about 10.000000 seconds
00:14:18.756
00:14:18.756 Latency(us)
00:14:18.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:18.756 ===================================================================================================================
00:14:18.756 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:14:18.756 [2024-04-26 14:18:59.877878] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:14:18.756 14:18:59 -- common/autotest_common.sh@960 -- # wait 3152932
00:14:18.756 14:19:00 -- target/tls.sh@37 -- # return 1
00:14:18.756 14:19:00 -- common/autotest_common.sh@641 -- # es=1
00:14:18.756 14:19:00 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:14:18.756 14:19:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:14:18.756 14:19:00 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:14:18.756 14:19:00 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2fkpFBchYI
00:14:18.756 14:19:00 -- common/autotest_common.sh@638 -- # local es=0
00:14:18.756 14:19:00 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2fkpFBchYI
00:14:18.756 14:19:00 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf
00:14:18.756 14:19:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:14:18.756 14:19:00 -- common/autotest_common.sh@630 -- # type -t run_bdevperf
00:14:18.756 14:19:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:14:18.756 14:19:00 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2fkpFBchYI
00:14:18.756 14:19:00 -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:14:18.756 14:19:00 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:14:18.756 14:19:00 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2
00:14:18.756 14:19:00 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2fkpFBchYI' 00:14:18.756 14:19:00 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:18.756 14:19:00 -- target/tls.sh@28 -- # bdevperf_pid=3153036 00:14:18.756 14:19:00 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:18.756 14:19:00 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:18.756 14:19:00 -- target/tls.sh@31 -- # waitforlisten 3153036 /var/tmp/bdevperf.sock 00:14:18.756 14:19:00 -- common/autotest_common.sh@817 -- # '[' -z 3153036 ']' 00:14:18.756 14:19:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:18.756 14:19:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:18.756 14:19:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:18.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:18.756 14:19:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:18.756 14:19:00 -- common/autotest_common.sh@10 -- # set +x 00:14:18.756 [2024-04-26 14:19:00.136742] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:14:18.756 [2024-04-26 14:19:00.136842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3153036 ] 00:14:18.756 EAL: No free 2048 kB hugepages reported on node 1 00:14:18.756 [2024-04-26 14:19:00.198802] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.756 [2024-04-26 14:19:00.313788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.015 14:19:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:19.015 14:19:00 -- common/autotest_common.sh@850 -- # return 0 00:14:19.015 14:19:00 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.2fkpFBchYI 00:14:19.273 [2024-04-26 14:19:00.680208] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:19.273 [2024-04-26 14:19:00.680327] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:19.273 [2024-04-26 14:19:00.686246] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:19.273 [2024-04-26 14:19:00.686283] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:19.273 [2024-04-26 14:19:00.686326] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:19.273 [2024-04-26 14:19:00.686483] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa66980 (107): Transport endpoint is not connected 00:14:19.273 [2024-04-26 14:19:00.687477] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa66980 (9): Bad file descriptor
00:14:19.273 [2024-04-26 14:19:00.688479] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:14:19.273 [2024-04-26 14:19:00.688504] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:14:19.273 [2024-04-26 14:19:00.688519] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:14:19.273 request:
00:14:19.273 {
00:14:19.273 "name": "TLSTEST",
00:14:19.273 "trtype": "tcp",
00:14:19.273 "traddr": "10.0.0.2",
00:14:19.273 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:14:19.273 "adrfam": "ipv4",
00:14:19.273 "trsvcid": "4420",
00:14:19.273 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:14:19.273 "psk": "/tmp/tmp.2fkpFBchYI",
00:14:19.273 "method": "bdev_nvme_attach_controller",
00:14:19.273 "req_id": 1
00:14:19.273 }
00:14:19.273 Got JSON-RPC error response
00:14:19.273 response:
00:14:19.273 {
00:14:19.273 "code": -32602,
00:14:19.273 "message": "Invalid parameters"
00:14:19.273 }
00:14:19.273 14:19:00 -- target/tls.sh@36 -- # killprocess 3153036
00:14:19.273 14:19:00 -- common/autotest_common.sh@936 -- # '[' -z 3153036 ']'
00:14:19.273 14:19:00 -- common/autotest_common.sh@940 -- # kill -0 3153036
00:14:19.273 14:19:00 -- common/autotest_common.sh@941 -- # uname
00:14:19.273 14:19:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:19.273 14:19:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3153036
00:14:19.273 14:19:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:14:19.273 14:19:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:14:19.273 14:19:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3153036'
00:14:19.273 killing process with pid 3153036
00:14:19.273 14:19:00 -- common/autotest_common.sh@955 -- # kill 3153036
00:14:19.273 Received shutdown signal, test time was about 10.000000 seconds
00:14:19.273
00:14:19.273 Latency(us)
00:14:19.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:19.273 ===================================================================================================================
00:14:19.273 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:14:19.273 [2024-04-26 14:19:00.735393] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:14:19.273 14:19:00 -- common/autotest_common.sh@960 -- # wait 3153036
00:14:19.532 14:19:00 -- target/tls.sh@37 -- # return 1
00:14:19.532 14:19:00 -- common/autotest_common.sh@641 -- # es=1
00:14:19.532 14:19:00 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:14:19.532 14:19:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:14:19.532 14:19:00 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:14:19.532 14:19:00 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2fkpFBchYI
00:14:19.532 14:19:00 -- common/autotest_common.sh@638 -- # local es=0
00:14:19.532 14:19:00 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2fkpFBchYI
00:14:19.532 14:19:00 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf
00:14:19.532 14:19:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:14:19.532 14:19:00 --
common/autotest_common.sh@630 -- # type -t run_bdevperf 00:14:19.532 14:19:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:19.532 14:19:00 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2fkpFBchYI 00:14:19.532 14:19:00 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:19.532 14:19:00 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:19.532 14:19:00 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:19.532 14:19:00 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2fkpFBchYI' 00:14:19.532 14:19:00 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:19.532 14:19:00 -- target/tls.sh@28 -- # bdevperf_pid=3153140 00:14:19.532 14:19:00 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:19.532 14:19:00 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:19.532 14:19:00 -- target/tls.sh@31 -- # waitforlisten 3153140 /var/tmp/bdevperf.sock 00:14:19.532 14:19:00 -- common/autotest_common.sh@817 -- # '[' -z 3153140 ']' 00:14:19.532 14:19:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:19.532 14:19:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:19.532 14:19:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:19.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:19.532 14:19:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:19.532 14:19:00 -- common/autotest_common.sh@10 -- # set +x 00:14:19.533 [2024-04-26 14:19:00.993898] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
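[editor's note] The NOT wrapper whose xtrace keeps recurring here (local es=0, valid_exec_arg, (( es > 128 )), (( !es == 0 ))) inverts an expected failure into a test pass. Reconstructed loosely from the traced lines, its core is:

    NOT() {
        local es=0
        "$@" || es=$?   # run the wrapped command, remember how it exited
        # the real helper also validates the argument and screens signal exits here
        (( !es == 0 ))  # arithmetic truth: NOT succeeds only when the command failed
    }
    NOT false && echo "expected failure observed"

So target/tls.sh@37 returning 1 out of run_bdevperf is the success path for these mismatched-PSK cases.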
00:14:19.533 [2024-04-26 14:19:00.993995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3153140 ] 00:14:19.533 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.533 [2024-04-26 14:19:01.055060] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.790 [2024-04-26 14:19:01.173551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.790 14:19:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:19.790 14:19:01 -- common/autotest_common.sh@850 -- # return 0 00:14:19.790 14:19:01 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2fkpFBchYI 00:14:20.048 [2024-04-26 14:19:01.549858] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:20.048 [2024-04-26 14:19:01.549995] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:20.048 [2024-04-26 14:19:01.558463] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:20.048 [2024-04-26 14:19:01.558500] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:20.048 [2024-04-26 14:19:01.558541] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:20.048 [2024-04-26 14:19:01.559105] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241d980 (107): Transport endpoint is not connected 00:14:20.048 [2024-04-26 14:19:01.560097] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241d980 (9): Bad file descriptor 00:14:20.048 [2024-04-26 14:19:01.561096] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:20.048 [2024-04-26 14:19:01.561118] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:20.048 [2024-04-26 14:19:01.561133] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:14:20.048 request:
00:14:20.048 {
00:14:20.048 "name": "TLSTEST",
00:14:20.048 "trtype": "tcp",
00:14:20.048 "traddr": "10.0.0.2",
00:14:20.048 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:14:20.048 "adrfam": "ipv4",
00:14:20.048 "trsvcid": "4420",
00:14:20.048 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:14:20.048 "psk": "/tmp/tmp.2fkpFBchYI",
00:14:20.048 "method": "bdev_nvme_attach_controller",
00:14:20.048 "req_id": 1
00:14:20.048 }
00:14:20.048 Got JSON-RPC error response
00:14:20.048 response:
00:14:20.048 {
00:14:20.048 "code": -32602,
00:14:20.048 "message": "Invalid parameters"
00:14:20.048 }
00:14:20.048 14:19:01 -- target/tls.sh@36 -- # killprocess 3153140
00:14:20.048 14:19:01 -- common/autotest_common.sh@936 -- # '[' -z 3153140 ']'
00:14:20.048 14:19:01 -- common/autotest_common.sh@940 -- # kill -0 3153140
00:14:20.048 14:19:01 -- common/autotest_common.sh@941 -- # uname
00:14:20.048 14:19:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:20.048 14:19:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3153140
00:14:20.048 14:19:01 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:14:20.048 14:19:01 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:14:20.048 14:19:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3153140'
00:14:20.048 killing process with pid 3153140
00:14:20.048 14:19:01 -- common/autotest_common.sh@955 -- # kill 3153140
00:14:20.048 Received shutdown signal, test time was about 10.000000 seconds
00:14:20.048
00:14:20.048 Latency(us)
00:14:20.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:20.048 ===================================================================================================================
00:14:20.048 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:14:20.048 [2024-04-26 14:19:01.604545] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:14:20.048 14:19:01 -- common/autotest_common.sh@960 -- # wait 3153140
00:14:20.306 14:19:01 -- target/tls.sh@37 -- # return 1
00:14:20.306 14:19:01 -- common/autotest_common.sh@641 -- # es=1
00:14:20.306 14:19:01 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:14:20.306 14:19:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:14:20.306 14:19:01 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:14:20.306 14:19:01 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:14:20.306 14:19:01 -- common/autotest_common.sh@638 -- # local es=0
00:14:20.306 14:19:01 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:14:20.306 14:19:01 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf
00:14:20.306 14:19:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:14:20.306 14:19:01 -- common/autotest_common.sh@630 -- # type -t run_bdevperf
00:14:20.306 14:19:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:14:20.306 14:19:01 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:14:20.306 14:19:01 -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:14:20.306 14:19:01 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:14:20.306 14:19:01 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:14:20.306 14:19:01 -- target/tls.sh@23 -- # psk=
00:14:20.306 14:19:01 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:20.306 14:19:01 -- target/tls.sh@28 -- # bdevperf_pid=3153242 00:14:20.306 14:19:01 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:20.306 14:19:01 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:20.306 14:19:01 -- target/tls.sh@31 -- # waitforlisten 3153242 /var/tmp/bdevperf.sock 00:14:20.306 14:19:01 -- common/autotest_common.sh@817 -- # '[' -z 3153242 ']' 00:14:20.306 14:19:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:20.306 14:19:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:20.306 14:19:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:20.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:20.306 14:19:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:20.306 14:19:01 -- common/autotest_common.sh@10 -- # set +x 00:14:20.306 [2024-04-26 14:19:01.863139] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:14:20.306 [2024-04-26 14:19:01.863241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3153242 ] 00:14:20.563 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.563 [2024-04-26 14:19:01.923606] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.563 [2024-04-26 14:19:02.038345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:20.882 14:19:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:20.882 14:19:02 -- common/autotest_common.sh@850 -- # return 0 00:14:20.882 14:19:02 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:20.882 [2024-04-26 14:19:02.420820] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:20.882 [2024-04-26 14:19:02.422284] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1537480 (9): Bad file descriptor 00:14:20.882 [2024-04-26 14:19:02.423263] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:20.882 [2024-04-26 14:19:02.423294] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:20.882 [2024-04-26 14:19:02.423309] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:20.882 request:
00:14:20.882 {
00:14:20.882 "name": "TLSTEST",
00:14:20.882 "trtype": "tcp",
00:14:20.882 "traddr": "10.0.0.2",
00:14:20.882 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:14:20.882 "adrfam": "ipv4",
00:14:20.882 "trsvcid": "4420",
00:14:20.882 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:14:20.882 "method": "bdev_nvme_attach_controller",
00:14:20.882 "req_id": 1
00:14:20.882 }
00:14:20.882 Got JSON-RPC error response
00:14:20.882 response:
00:14:20.882 {
00:14:20.882 "code": -32602,
00:14:20.882 "message": "Invalid parameters"
00:14:20.882 }
00:14:20.882 14:19:02 -- target/tls.sh@36 -- # killprocess 3153242
00:14:20.882 14:19:02 -- common/autotest_common.sh@936 -- # '[' -z 3153242 ']'
00:14:20.882 14:19:02 -- common/autotest_common.sh@940 -- # kill -0 3153242
00:14:20.882 14:19:02 -- common/autotest_common.sh@941 -- # uname
00:14:20.882 14:19:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:21.139 14:19:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3153242
00:14:21.139 14:19:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:14:21.139 14:19:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:14:21.139 14:19:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3153242'
00:14:21.139 killing process with pid 3153242
00:14:21.139 14:19:02 -- common/autotest_common.sh@955 -- # kill 3153242
00:14:21.139 Received shutdown signal, test time was about 10.000000 seconds
00:14:21.139
00:14:21.139 Latency(us)
00:14:21.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:21.139 ===================================================================================================================
00:14:21.139 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:14:21.140 14:19:02 -- common/autotest_common.sh@960 -- # wait 3153242
00:14:21.140 14:19:02 -- target/tls.sh@37 -- # return 1
00:14:21.140 14:19:02 -- common/autotest_common.sh@641 -- # es=1
00:14:21.140 14:19:02 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:14:21.140 14:19:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:14:21.140 14:19:02 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:14:21.140 14:19:02 -- target/tls.sh@158 -- # killprocess 3150471
00:14:21.140 14:19:02 -- common/autotest_common.sh@936 -- # '[' -z 3150471 ']'
00:14:21.140 14:19:02 -- common/autotest_common.sh@940 -- # kill -0 3150471
00:14:21.140 14:19:02 -- common/autotest_common.sh@941 -- # uname
00:14:21.140 14:19:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:21.140 14:19:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3150471
00:14:21.140 14:19:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:14:21.140 14:19:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:14:21.140 14:19:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3150471'
00:14:21.140 killing process with pid 3150471
00:14:21.140 14:19:02 -- common/autotest_common.sh@955 -- # kill 3150471
00:14:21.140 [2024-04-26 14:19:02.703817] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:14:21.398 14:19:02 -- common/autotest_common.sh@960 -- # wait 3150471
00:14:21.398 14:19:02 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2
00:14:21.398 14:19:02 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1
00112233445566778899aabbccddeeff0011223344556677 2 00:14:21.398 14:19:02 -- nvmf/common.sh@691 -- # local prefix key digest 00:14:21.398 14:19:02 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:14:21.398 14:19:02 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:21.398 14:19:02 -- nvmf/common.sh@693 -- # digest=2 00:14:21.398 14:19:02 -- nvmf/common.sh@694 -- # python - 00:14:21.656 14:19:02 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:21.656 14:19:02 -- target/tls.sh@160 -- # mktemp 00:14:21.656 14:19:02 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.5IrwQBrtvl 00:14:21.656 14:19:02 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:21.656 14:19:02 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.5IrwQBrtvl 00:14:21.656 14:19:02 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:14:21.656 14:19:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:21.656 14:19:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:21.656 14:19:02 -- common/autotest_common.sh@10 -- # set +x 00:14:21.656 14:19:02 -- nvmf/common.sh@470 -- # nvmfpid=3153356 00:14:21.656 14:19:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:21.656 14:19:02 -- nvmf/common.sh@471 -- # waitforlisten 3153356 00:14:21.656 14:19:02 -- common/autotest_common.sh@817 -- # '[' -z 3153356 ']' 00:14:21.656 14:19:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.656 14:19:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:21.656 14:19:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.656 14:19:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:21.656 14:19:02 -- common/autotest_common.sh@10 -- # set +x 00:14:21.656 [2024-04-26 14:19:03.039447] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:14:21.656 [2024-04-26 14:19:03.039550] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.656 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.656 [2024-04-26 14:19:03.104466] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.656 [2024-04-26 14:19:03.218746] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.656 [2024-04-26 14:19:03.218808] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.656 [2024-04-26 14:19:03.218824] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.656 [2024-04-26 14:19:03.218837] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.656 [2024-04-26 14:19:03.218849] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
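[editor's note] Unlike the first target, which was started with --wait-for-rpc so the ssl socket options could be adjusted before subsystem initialization and then released with framework_start_init, this second nvmf_tgt boots straight through: the long-key run keeps the default socket settings. The two startup patterns side by side, paths shortened:

    # first target: hold initialization until the sock options are in place
    nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
    scripts/rpc.py framework_start_init
    # second target (this block): defaults are fine, start normally
    nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

Both instances run under ip netns exec cvl_0_0_ns_spdk, hence the identical 10.0.0.2:4420 listener.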
00:14:21.656 [2024-04-26 14:19:03.218880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.913 14:19:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:21.913 14:19:03 -- common/autotest_common.sh@850 -- # return 0 00:14:21.913 14:19:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:21.913 14:19:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:21.913 14:19:03 -- common/autotest_common.sh@10 -- # set +x 00:14:21.913 14:19:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.913 14:19:03 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.5IrwQBrtvl 00:14:21.913 14:19:03 -- target/tls.sh@49 -- # local key=/tmp/tmp.5IrwQBrtvl 00:14:21.913 14:19:03 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:22.169 [2024-04-26 14:19:03.613719] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.170 14:19:03 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:22.427 14:19:03 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:22.684 [2024-04-26 14:19:04.191222] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:22.684 [2024-04-26 14:19:04.191448] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.684 14:19:04 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:22.942 malloc0 00:14:23.200 14:19:04 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:23.458 14:19:04 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5IrwQBrtvl 00:14:23.717 [2024-04-26 14:19:05.076202] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:23.717 14:19:05 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5IrwQBrtvl 00:14:23.717 14:19:05 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:23.717 14:19:05 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:23.717 14:19:05 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:23.717 14:19:05 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.5IrwQBrtvl' 00:14:23.717 14:19:05 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:23.717 14:19:05 -- target/tls.sh@28 -- # bdevperf_pid=3153581 00:14:23.717 14:19:05 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:23.717 14:19:05 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:23.717 14:19:05 -- target/tls.sh@31 -- # waitforlisten 3153581 /var/tmp/bdevperf.sock 00:14:23.717 14:19:05 -- common/autotest_common.sh@817 -- # '[' -z 3153581 ']' 00:14:23.717 14:19:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:23.717 14:19:05 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:14:23.717 14:19:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:23.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:23.717 14:19:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:23.717 14:19:05 -- common/autotest_common.sh@10 -- # set +x 00:14:23.717 [2024-04-26 14:19:05.145235] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:14:23.717 [2024-04-26 14:19:05.145328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3153581 ] 00:14:23.717 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.717 [2024-04-26 14:19:05.205627] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.975 [2024-04-26 14:19:05.323499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.975 14:19:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:23.975 14:19:05 -- common/autotest_common.sh@850 -- # return 0 00:14:23.975 14:19:05 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5IrwQBrtvl 00:14:24.233 [2024-04-26 14:19:05.699110] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:24.233 [2024-04-26 14:19:05.699240] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:24.233 TLSTESTn1 00:14:24.233 14:19:05 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:24.491 Running I/O for 10 seconds... 
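While the ten-second verify run above ticks along, it is worth restating what the xtrace scaffolding actually executed. Stripped down, the target/tls.sh@165-@167 sequence is the following (paths shortened to repo-relative; every command and argument is taken verbatim from the trace):

    # target side: TCP transport, subsystem, TLS listener (-k), namespace, PSK-bound host
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.5IrwQBrtvl
    # initiator side: bdevperf attaches over TLS with the same PSK, then runs verify I/O
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5IrwQBrtvl
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The TLSTESTn1 results table for this run follows.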
00:14:34.457
00:14:34.457 Latency(us)
00:14:34.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:34.457 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:14:34.457 Verification LBA range: start 0x0 length 0x2000
00:14:34.457 TLSTESTn1 : 10.03 3233.25 12.63 0.00 0.00 39511.64 9175.04 40583.77
00:14:34.457 ===================================================================================================================
00:14:34.457 Total : 3233.25 12.63 0.00 0.00 39511.64 9175.04 40583.77
00:14:34.457 0
00:14:34.457 14:19:15 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:14:34.457 14:19:15 -- target/tls.sh@45 -- # killprocess 3153581
00:14:34.457 14:19:15 -- common/autotest_common.sh@936 -- # '[' -z 3153581 ']'
00:14:34.457 14:19:15 -- common/autotest_common.sh@940 -- # kill -0 3153581
00:14:34.457 14:19:15 -- common/autotest_common.sh@941 -- # uname
00:14:34.457 14:19:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:34.457 14:19:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3153581
00:14:34.457 14:19:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:14:34.457 14:19:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:14:34.457 14:19:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3153581'
00:14:34.457 killing process with pid 3153581
00:14:34.457 14:19:15 -- common/autotest_common.sh@955 -- # kill 3153581
00:14:34.457 Received shutdown signal, test time was about 10.000000 seconds
00:14:34.457
00:14:34.457 Latency(us)
00:14:34.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:34.457 ===================================================================================================================
00:14:34.457 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:14:34.457 [2024-04-26 14:19:15.998260] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:14:34.457 14:19:15 -- common/autotest_common.sh@960 -- # wait 3153581
00:14:34.716 14:19:16 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.5IrwQBrtvl
00:14:34.716 14:19:16 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5IrwQBrtvl
00:14:34.716 14:19:16 -- common/autotest_common.sh@638 -- # local es=0
00:14:34.716 14:19:16 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5IrwQBrtvl
00:14:34.716 14:19:16 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf
00:14:34.716 14:19:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:14:34.716 14:19:16 -- common/autotest_common.sh@630 -- # type -t run_bdevperf
00:14:34.716 14:19:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:14:34.716 14:19:16 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5IrwQBrtvl
00:14:34.716 14:19:16 -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:14:34.716 14:19:16 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:14:34.716 14:19:16 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:14:34.716 14:19:16 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.5IrwQBrtvl'
00:14:34.716 14:19:16 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:14:34.716 14:19:16 -- target/tls.sh@28 -- # bdevperf_pid=3154586
00:14:34.716 14:19:16 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:14:34.716 14:19:16 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:14:34.716 14:19:16 -- target/tls.sh@31 -- # waitforlisten 3154586 /var/tmp/bdevperf.sock
00:14:34.716 14:19:16 -- common/autotest_common.sh@817 -- # '[' -z 3154586 ']'
00:14:34.716 14:19:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:14:34.716 14:19:16 -- common/autotest_common.sh@822 -- # local max_retries=100
00:14:34.716 14:19:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:14:34.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:14:34.716 14:19:16 -- common/autotest_common.sh@826 -- # xtrace_disable
00:14:34.716 14:19:16 -- common/autotest_common.sh@10 -- # set +x
00:14:34.974 [2024-04-26 14:19:16.266980] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
00:14:34.974 [2024-04-26 14:19:16.267079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3154586 ]
00:14:34.974 EAL: No free 2048 kB hugepages reported on node 1
00:14:34.974 [2024-04-26 14:19:16.327862] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:35.232 [2024-04-26 14:19:16.445354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:14:35.232 14:19:16 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:14:35.232 14:19:16 -- common/autotest_common.sh@850 -- # return 0
00:14:35.232 14:19:16 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5IrwQBrtvl
00:14:35.490 [2024-04-26 14:19:16.821676] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:14:35.490 [2024-04-26 14:19:16.821764] bdev_nvme.c:6067:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file
00:14:35.490 [2024-04-26 14:19:16.821780] bdev_nvme.c:6176:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.5IrwQBrtvl
00:14:35.490 request:
00:14:35.490 {
00:14:35.490 "name": "TLSTEST",
00:14:35.490 "trtype": "tcp",
00:14:35.490 "traddr": "10.0.0.2",
00:14:35.490 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:14:35.490 "adrfam": "ipv4",
00:14:35.490 "trsvcid": "4420",
00:14:35.490 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:14:35.490 "psk": "/tmp/tmp.5IrwQBrtvl",
00:14:35.490 "method": "bdev_nvme_attach_controller",
00:14:35.490 "req_id": 1
00:14:35.490 }
00:14:35.490 Got JSON-RPC error response
00:14:35.490 response:
00:14:35.490 {
00:14:35.490 "code": -1,
00:14:35.490 "message": "Operation not permitted"
00:14:35.490 }
00:14:35.490 14:19:16 -- target/tls.sh@36 -- # killprocess 3154586
00:14:35.490 14:19:16 -- common/autotest_common.sh@936 -- # '[' -z 3154586 ']'
00:14:35.490 14:19:16 -- common/autotest_common.sh@940 -- # kill -0 3154586
00:14:35.490 14:19:16 -- common/autotest_common.sh@941 -- # uname
00:14:35.490 14:19:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:35.490 14:19:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3154586
00:14:35.490 14:19:16 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:14:35.490 14:19:16 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:14:35.490 14:19:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3154586'
00:14:35.490 killing process with pid 3154586
00:14:35.490 14:19:16 -- common/autotest_common.sh@955 -- # kill 3154586
00:14:35.490 Received shutdown signal, test time was about 10.000000 seconds
00:14:35.490
00:14:35.490 Latency(us)
00:14:35.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:35.490 ===================================================================================================================
00:14:35.490 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:14:35.490 14:19:16 -- common/autotest_common.sh@960 -- # wait 3154586
00:14:35.749 14:19:17 -- target/tls.sh@37 -- # return 1
00:14:35.749 14:19:17 -- common/autotest_common.sh@641 -- # es=1
00:14:35.749 14:19:17 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:14:35.749 14:19:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:14:35.749 14:19:17 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:14:35.749 14:19:17 -- target/tls.sh@174 -- # killprocess 3153356
00:14:35.749 14:19:17 -- common/autotest_common.sh@936 -- # '[' -z 3153356 ']'
00:14:35.749 14:19:17 -- common/autotest_common.sh@940 -- # kill -0 3153356
00:14:35.749 14:19:17 -- common/autotest_common.sh@941 -- # uname
00:14:35.749 14:19:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:35.749 14:19:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3153356
00:14:35.749 14:19:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:14:35.749 14:19:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:14:35.749 14:19:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3153356'
00:14:35.749 killing process with pid 3153356
00:14:35.749 14:19:17 -- common/autotest_common.sh@955 -- # kill 3153356
00:14:35.749 [2024-04-26 14:19:17.101837] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:14:35.749 14:19:17 -- common/autotest_common.sh@960 -- # wait 3153356
00:14:36.007 14:19:17 -- target/tls.sh@175 -- # nvmfappstart -m 0x2
00:14:36.007 14:19:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:14:36.007 14:19:17 -- common/autotest_common.sh@710 -- # xtrace_disable
00:14:36.007 14:19:17 -- common/autotest_common.sh@10 -- # set +x
00:14:36.007 14:19:17 -- nvmf/common.sh@470 -- # nvmfpid=3154706
00:14:36.007 14:19:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:14:36.007 14:19:17 -- nvmf/common.sh@471 -- # waitforlisten 3154706
00:14:36.007 14:19:17 -- common/autotest_common.sh@817 -- # '[' -z 3154706 ']'
00:14:36.007 14:19:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:36.007 14:19:17 -- common/autotest_common.sh@822 -- # local max_retries=100
00:14:36.007 14:19:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:36.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
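Two failure paths are being asserted here, and both hinge on the chmod 0666 at tls.sh@170. On the initiator side (the trace just above), bdev_nvme_load_psk refuses the world-readable key file, so the attach comes back as JSON-RPC code -1 ("Operation not permitted") and the NOT wrapper inverts that failure into a pass. The fresh nvmf_tgt now starting (pid 3154706) exercises the target-side twin of the same check. Condensed, with the key file still at 0666, the expectation is:

    chmod 0666 /tmp/tmp.5IrwQBrtvl       # deliberately too permissive
    # initiator side, shown above: attach is rejected
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5IrwQBrtvl   # -> -1 Operation not permitted
    # target side, exercised next: add_host cannot read the PSK either
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.5IrwQBrtvl                                # -> -32603 Internal error

Both error strings ("Incorrect permissions for PSK file" and "Could not retrieve PSK from file") appear verbatim in the surrounding trace.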
00:14:36.007 14:19:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:36.007 14:19:17 -- common/autotest_common.sh@10 -- # set +x 00:14:36.007 [2024-04-26 14:19:17.376339] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:14:36.007 [2024-04-26 14:19:17.376424] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.007 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.007 [2024-04-26 14:19:17.440308] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.007 [2024-04-26 14:19:17.554374] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.007 [2024-04-26 14:19:17.554439] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.007 [2024-04-26 14:19:17.554454] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.007 [2024-04-26 14:19:17.554468] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.007 [2024-04-26 14:19:17.554480] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:36.007 [2024-04-26 14:19:17.554518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.265 14:19:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:36.265 14:19:17 -- common/autotest_common.sh@850 -- # return 0 00:14:36.265 14:19:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:36.265 14:19:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:36.265 14:19:17 -- common/autotest_common.sh@10 -- # set +x 00:14:36.265 14:19:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.265 14:19:17 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.5IrwQBrtvl 00:14:36.265 14:19:17 -- common/autotest_common.sh@638 -- # local es=0 00:14:36.265 14:19:17 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.5IrwQBrtvl 00:14:36.265 14:19:17 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:14:36.265 14:19:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:36.265 14:19:17 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:14:36.265 14:19:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:36.265 14:19:17 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.5IrwQBrtvl 00:14:36.265 14:19:17 -- target/tls.sh@49 -- # local key=/tmp/tmp.5IrwQBrtvl 00:14:36.265 14:19:17 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:36.524 [2024-04-26 14:19:17.959448] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.524 14:19:17 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:36.781 14:19:18 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:37.039 [2024-04-26 14:19:18.545046] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:37.039 [2024-04-26 14:19:18.545292] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.039 14:19:18 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:37.298 malloc0 00:14:37.298 14:19:18 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:37.864 14:19:19 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5IrwQBrtvl 00:14:37.864 [2024-04-26 14:19:19.377809] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:37.864 [2024-04-26 14:19:19.377850] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:14:37.864 [2024-04-26 14:19:19.377885] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:14:37.864 request: 00:14:37.864 { 00:14:37.864 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:37.864 "host": "nqn.2016-06.io.spdk:host1", 00:14:37.864 "psk": "/tmp/tmp.5IrwQBrtvl", 00:14:37.864 "method": "nvmf_subsystem_add_host", 00:14:37.864 "req_id": 1 00:14:37.864 } 00:14:37.864 Got JSON-RPC error response 00:14:37.864 response: 00:14:37.864 { 00:14:37.864 "code": -32603, 00:14:37.864 "message": "Internal error" 00:14:37.864 } 00:14:37.864 14:19:19 -- common/autotest_common.sh@641 -- # es=1 00:14:37.864 14:19:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:37.864 14:19:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:37.864 14:19:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:37.864 14:19:19 -- target/tls.sh@180 -- # killprocess 3154706 00:14:37.864 14:19:19 -- common/autotest_common.sh@936 -- # '[' -z 3154706 ']' 00:14:37.864 14:19:19 -- common/autotest_common.sh@940 -- # kill -0 3154706 00:14:37.864 14:19:19 -- common/autotest_common.sh@941 -- # uname 00:14:37.864 14:19:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:37.864 14:19:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3154706 00:14:37.864 14:19:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:37.864 14:19:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:37.864 14:19:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3154706' 00:14:37.864 killing process with pid 3154706 00:14:37.864 14:19:19 -- common/autotest_common.sh@955 -- # kill 3154706 00:14:37.864 14:19:19 -- common/autotest_common.sh@960 -- # wait 3154706 00:14:38.122 14:19:19 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.5IrwQBrtvl 00:14:38.122 14:19:19 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:14:38.122 14:19:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:38.122 14:19:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:38.122 14:19:19 -- common/autotest_common.sh@10 -- # set +x 00:14:38.122 14:19:19 -- nvmf/common.sh@470 -- # nvmfpid=3154937 00:14:38.122 14:19:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:38.122 14:19:19 -- nvmf/common.sh@471 -- # waitforlisten 3154937 00:14:38.122 14:19:19 -- common/autotest_common.sh@817 -- # '[' -z 3154937 ']' 00:14:38.122 14:19:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.122 14:19:19 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:14:38.122 14:19:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.122 14:19:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:38.122 14:19:19 -- common/autotest_common.sh@10 -- # set +x 00:14:38.381 [2024-04-26 14:19:19.711973] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:14:38.381 [2024-04-26 14:19:19.712075] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.381 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.381 [2024-04-26 14:19:19.791283] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.381 [2024-04-26 14:19:19.943146] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.381 [2024-04-26 14:19:19.943220] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.381 [2024-04-26 14:19:19.943263] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.381 [2024-04-26 14:19:19.943292] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.381 [2024-04-26 14:19:19.943316] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.381 [2024-04-26 14:19:19.943385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.639 14:19:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:38.639 14:19:20 -- common/autotest_common.sh@850 -- # return 0 00:14:38.639 14:19:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:38.639 14:19:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:38.639 14:19:20 -- common/autotest_common.sh@10 -- # set +x 00:14:38.639 14:19:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.639 14:19:20 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.5IrwQBrtvl 00:14:38.639 14:19:20 -- target/tls.sh@49 -- # local key=/tmp/tmp.5IrwQBrtvl 00:14:38.639 14:19:20 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:38.897 [2024-04-26 14:19:20.358311] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.897 14:19:20 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:39.156 14:19:20 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:39.414 [2024-04-26 14:19:20.939912] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:39.414 [2024-04-26 14:19:20.940152] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.414 14:19:20 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:39.672 malloc0 00:14:39.930 14:19:21 -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:40.187 14:19:21 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5IrwQBrtvl 00:14:40.445 [2024-04-26 14:19:21.820894] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:40.445 14:19:21 -- target/tls.sh@188 -- # bdevperf_pid=3155161 00:14:40.445 14:19:21 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:40.445 14:19:21 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:40.445 14:19:21 -- target/tls.sh@191 -- # waitforlisten 3155161 /var/tmp/bdevperf.sock 00:14:40.445 14:19:21 -- common/autotest_common.sh@817 -- # '[' -z 3155161 ']' 00:14:40.445 14:19:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:40.445 14:19:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:40.446 14:19:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:40.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:40.446 14:19:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:40.446 14:19:21 -- common/autotest_common.sh@10 -- # set +x 00:14:40.446 [2024-04-26 14:19:21.885955] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:14:40.446 [2024-04-26 14:19:21.886038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3155161 ] 00:14:40.446 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.446 [2024-04-26 14:19:21.944675] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.704 [2024-04-26 14:19:22.059352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:40.704 14:19:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:40.704 14:19:22 -- common/autotest_common.sh@850 -- # return 0 00:14:40.704 14:19:22 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5IrwQBrtvl 00:14:40.961 [2024-04-26 14:19:22.431127] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:40.961 [2024-04-26 14:19:22.431240] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:40.961 TLSTESTn1 00:14:40.961 14:19:22 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:14:41.527 14:19:22 -- target/tls.sh@196 -- # tgtconf='{ 00:14:41.527 "subsystems": [ 00:14:41.527 { 00:14:41.527 "subsystem": "keyring", 00:14:41.527 "config": [] 00:14:41.527 }, 00:14:41.527 { 00:14:41.527 "subsystem": "iobuf", 00:14:41.527 "config": [ 00:14:41.527 { 00:14:41.527 "method": "iobuf_set_options", 00:14:41.527 "params": { 00:14:41.527 
"small_pool_count": 8192, 00:14:41.527 "large_pool_count": 1024, 00:14:41.527 "small_bufsize": 8192, 00:14:41.527 "large_bufsize": 135168 00:14:41.527 } 00:14:41.527 } 00:14:41.527 ] 00:14:41.527 }, 00:14:41.527 { 00:14:41.527 "subsystem": "sock", 00:14:41.527 "config": [ 00:14:41.527 { 00:14:41.527 "method": "sock_impl_set_options", 00:14:41.527 "params": { 00:14:41.527 "impl_name": "posix", 00:14:41.527 "recv_buf_size": 2097152, 00:14:41.527 "send_buf_size": 2097152, 00:14:41.527 "enable_recv_pipe": true, 00:14:41.527 "enable_quickack": false, 00:14:41.527 "enable_placement_id": 0, 00:14:41.527 "enable_zerocopy_send_server": true, 00:14:41.527 "enable_zerocopy_send_client": false, 00:14:41.527 "zerocopy_threshold": 0, 00:14:41.527 "tls_version": 0, 00:14:41.527 "enable_ktls": false 00:14:41.527 } 00:14:41.527 }, 00:14:41.527 { 00:14:41.527 "method": "sock_impl_set_options", 00:14:41.527 "params": { 00:14:41.527 "impl_name": "ssl", 00:14:41.527 "recv_buf_size": 4096, 00:14:41.527 "send_buf_size": 4096, 00:14:41.527 "enable_recv_pipe": true, 00:14:41.527 "enable_quickack": false, 00:14:41.527 "enable_placement_id": 0, 00:14:41.527 "enable_zerocopy_send_server": true, 00:14:41.527 "enable_zerocopy_send_client": false, 00:14:41.527 "zerocopy_threshold": 0, 00:14:41.527 "tls_version": 0, 00:14:41.527 "enable_ktls": false 00:14:41.527 } 00:14:41.527 } 00:14:41.527 ] 00:14:41.527 }, 00:14:41.527 { 00:14:41.527 "subsystem": "vmd", 00:14:41.527 "config": [] 00:14:41.527 }, 00:14:41.527 { 00:14:41.527 "subsystem": "accel", 00:14:41.527 "config": [ 00:14:41.527 { 00:14:41.527 "method": "accel_set_options", 00:14:41.527 "params": { 00:14:41.527 "small_cache_size": 128, 00:14:41.527 "large_cache_size": 16, 00:14:41.527 "task_count": 2048, 00:14:41.527 "sequence_count": 2048, 00:14:41.527 "buf_count": 2048 00:14:41.527 } 00:14:41.527 } 00:14:41.527 ] 00:14:41.527 }, 00:14:41.527 { 00:14:41.527 "subsystem": "bdev", 00:14:41.527 "config": [ 00:14:41.527 { 00:14:41.527 "method": "bdev_set_options", 00:14:41.527 "params": { 00:14:41.527 "bdev_io_pool_size": 65535, 00:14:41.527 "bdev_io_cache_size": 256, 00:14:41.527 "bdev_auto_examine": true, 00:14:41.527 "iobuf_small_cache_size": 128, 00:14:41.527 "iobuf_large_cache_size": 16 00:14:41.527 } 00:14:41.527 }, 00:14:41.527 { 00:14:41.527 "method": "bdev_raid_set_options", 00:14:41.527 "params": { 00:14:41.527 "process_window_size_kb": 1024 00:14:41.527 } 00:14:41.527 }, 00:14:41.527 { 00:14:41.527 "method": "bdev_iscsi_set_options", 00:14:41.527 "params": { 00:14:41.527 "timeout_sec": 30 00:14:41.527 } 00:14:41.527 }, 00:14:41.527 { 00:14:41.527 "method": "bdev_nvme_set_options", 00:14:41.527 "params": { 00:14:41.527 "action_on_timeout": "none", 00:14:41.527 "timeout_us": 0, 00:14:41.527 "timeout_admin_us": 0, 00:14:41.527 "keep_alive_timeout_ms": 10000, 00:14:41.527 "arbitration_burst": 0, 00:14:41.527 "low_priority_weight": 0, 00:14:41.527 "medium_priority_weight": 0, 00:14:41.527 "high_priority_weight": 0, 00:14:41.527 "nvme_adminq_poll_period_us": 10000, 00:14:41.527 "nvme_ioq_poll_period_us": 0, 00:14:41.527 "io_queue_requests": 0, 00:14:41.527 "delay_cmd_submit": true, 00:14:41.527 "transport_retry_count": 4, 00:14:41.527 "bdev_retry_count": 3, 00:14:41.527 "transport_ack_timeout": 0, 00:14:41.527 "ctrlr_loss_timeout_sec": 0, 00:14:41.527 "reconnect_delay_sec": 0, 00:14:41.527 "fast_io_fail_timeout_sec": 0, 00:14:41.527 "disable_auto_failback": false, 00:14:41.527 "generate_uuids": false, 00:14:41.527 "transport_tos": 0, 00:14:41.527 "nvme_error_stat": 
false, 00:14:41.527 "rdma_srq_size": 0, 00:14:41.527 "io_path_stat": false, 00:14:41.527 "allow_accel_sequence": false, 00:14:41.528 "rdma_max_cq_size": 0, 00:14:41.528 "rdma_cm_event_timeout_ms": 0, 00:14:41.528 "dhchap_digests": [ 00:14:41.528 "sha256", 00:14:41.528 "sha384", 00:14:41.528 "sha512" 00:14:41.528 ], 00:14:41.528 "dhchap_dhgroups": [ 00:14:41.528 "null", 00:14:41.528 "ffdhe2048", 00:14:41.528 "ffdhe3072", 00:14:41.528 "ffdhe4096", 00:14:41.528 "ffdhe6144", 00:14:41.528 "ffdhe8192" 00:14:41.528 ] 00:14:41.528 } 00:14:41.528 }, 00:14:41.528 { 00:14:41.528 "method": "bdev_nvme_set_hotplug", 00:14:41.528 "params": { 00:14:41.528 "period_us": 100000, 00:14:41.528 "enable": false 00:14:41.528 } 00:14:41.528 }, 00:14:41.528 { 00:14:41.528 "method": "bdev_malloc_create", 00:14:41.528 "params": { 00:14:41.528 "name": "malloc0", 00:14:41.528 "num_blocks": 8192, 00:14:41.528 "block_size": 4096, 00:14:41.528 "physical_block_size": 4096, 00:14:41.528 "uuid": "6d67177f-5d93-4a87-aeae-5f5e32048f50", 00:14:41.528 "optimal_io_boundary": 0 00:14:41.528 } 00:14:41.528 }, 00:14:41.528 { 00:14:41.528 "method": "bdev_wait_for_examine" 00:14:41.528 } 00:14:41.528 ] 00:14:41.528 }, 00:14:41.528 { 00:14:41.528 "subsystem": "nbd", 00:14:41.528 "config": [] 00:14:41.528 }, 00:14:41.528 { 00:14:41.528 "subsystem": "scheduler", 00:14:41.528 "config": [ 00:14:41.528 { 00:14:41.528 "method": "framework_set_scheduler", 00:14:41.528 "params": { 00:14:41.528 "name": "static" 00:14:41.528 } 00:14:41.528 } 00:14:41.528 ] 00:14:41.528 }, 00:14:41.528 { 00:14:41.528 "subsystem": "nvmf", 00:14:41.528 "config": [ 00:14:41.528 { 00:14:41.528 "method": "nvmf_set_config", 00:14:41.528 "params": { 00:14:41.528 "discovery_filter": "match_any", 00:14:41.528 "admin_cmd_passthru": { 00:14:41.528 "identify_ctrlr": false 00:14:41.528 } 00:14:41.528 } 00:14:41.528 }, 00:14:41.528 { 00:14:41.528 "method": "nvmf_set_max_subsystems", 00:14:41.528 "params": { 00:14:41.528 "max_subsystems": 1024 00:14:41.528 } 00:14:41.528 }, 00:14:41.528 { 00:14:41.528 "method": "nvmf_set_crdt", 00:14:41.528 "params": { 00:14:41.528 "crdt1": 0, 00:14:41.528 "crdt2": 0, 00:14:41.528 "crdt3": 0 00:14:41.528 } 00:14:41.528 }, 00:14:41.528 { 00:14:41.528 "method": "nvmf_create_transport", 00:14:41.528 "params": { 00:14:41.528 "trtype": "TCP", 00:14:41.528 "max_queue_depth": 128, 00:14:41.528 "max_io_qpairs_per_ctrlr": 127, 00:14:41.528 "in_capsule_data_size": 4096, 00:14:41.528 "max_io_size": 131072, 00:14:41.528 "io_unit_size": 131072, 00:14:41.528 "max_aq_depth": 128, 00:14:41.528 "num_shared_buffers": 511, 00:14:41.528 "buf_cache_size": 4294967295, 00:14:41.528 "dif_insert_or_strip": false, 00:14:41.528 "zcopy": false, 00:14:41.528 "c2h_success": false, 00:14:41.528 "sock_priority": 0, 00:14:41.528 "abort_timeout_sec": 1, 00:14:41.528 "ack_timeout": 0 00:14:41.528 } 00:14:41.528 }, 00:14:41.528 { 00:14:41.528 "method": "nvmf_create_subsystem", 00:14:41.528 "params": { 00:14:41.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.528 "allow_any_host": false, 00:14:41.528 "serial_number": "SPDK00000000000001", 00:14:41.528 "model_number": "SPDK bdev Controller", 00:14:41.528 "max_namespaces": 10, 00:14:41.528 "min_cntlid": 1, 00:14:41.528 "max_cntlid": 65519, 00:14:41.528 "ana_reporting": false 00:14:41.528 } 00:14:41.528 }, 00:14:41.528 { 00:14:41.528 "method": "nvmf_subsystem_add_host", 00:14:41.528 "params": { 00:14:41.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.528 "host": "nqn.2016-06.io.spdk:host1", 00:14:41.528 "psk": 
"/tmp/tmp.5IrwQBrtvl" 00:14:41.528 } 00:14:41.528 }, 00:14:41.528 { 00:14:41.528 "method": "nvmf_subsystem_add_ns", 00:14:41.528 "params": { 00:14:41.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.528 "namespace": { 00:14:41.528 "nsid": 1, 00:14:41.528 "bdev_name": "malloc0", 00:14:41.528 "nguid": "6D67177F5D934A87AEAE5F5E32048F50", 00:14:41.528 "uuid": "6d67177f-5d93-4a87-aeae-5f5e32048f50", 00:14:41.528 "no_auto_visible": false 00:14:41.528 } 00:14:41.528 } 00:14:41.528 }, 00:14:41.528 { 00:14:41.528 "method": "nvmf_subsystem_add_listener", 00:14:41.528 "params": { 00:14:41.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.528 "listen_address": { 00:14:41.528 "trtype": "TCP", 00:14:41.528 "adrfam": "IPv4", 00:14:41.528 "traddr": "10.0.0.2", 00:14:41.528 "trsvcid": "4420" 00:14:41.528 }, 00:14:41.528 "secure_channel": true 00:14:41.528 } 00:14:41.528 } 00:14:41.528 ] 00:14:41.528 } 00:14:41.528 ] 00:14:41.528 }' 00:14:41.528 14:19:22 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:41.786 14:19:23 -- target/tls.sh@197 -- # bdevperfconf='{ 00:14:41.786 "subsystems": [ 00:14:41.786 { 00:14:41.786 "subsystem": "keyring", 00:14:41.786 "config": [] 00:14:41.786 }, 00:14:41.786 { 00:14:41.786 "subsystem": "iobuf", 00:14:41.786 "config": [ 00:14:41.786 { 00:14:41.786 "method": "iobuf_set_options", 00:14:41.786 "params": { 00:14:41.787 "small_pool_count": 8192, 00:14:41.787 "large_pool_count": 1024, 00:14:41.787 "small_bufsize": 8192, 00:14:41.787 "large_bufsize": 135168 00:14:41.787 } 00:14:41.787 } 00:14:41.787 ] 00:14:41.787 }, 00:14:41.787 { 00:14:41.787 "subsystem": "sock", 00:14:41.787 "config": [ 00:14:41.787 { 00:14:41.787 "method": "sock_impl_set_options", 00:14:41.787 "params": { 00:14:41.787 "impl_name": "posix", 00:14:41.787 "recv_buf_size": 2097152, 00:14:41.787 "send_buf_size": 2097152, 00:14:41.787 "enable_recv_pipe": true, 00:14:41.787 "enable_quickack": false, 00:14:41.787 "enable_placement_id": 0, 00:14:41.787 "enable_zerocopy_send_server": true, 00:14:41.787 "enable_zerocopy_send_client": false, 00:14:41.787 "zerocopy_threshold": 0, 00:14:41.787 "tls_version": 0, 00:14:41.787 "enable_ktls": false 00:14:41.787 } 00:14:41.787 }, 00:14:41.787 { 00:14:41.787 "method": "sock_impl_set_options", 00:14:41.787 "params": { 00:14:41.787 "impl_name": "ssl", 00:14:41.787 "recv_buf_size": 4096, 00:14:41.787 "send_buf_size": 4096, 00:14:41.787 "enable_recv_pipe": true, 00:14:41.787 "enable_quickack": false, 00:14:41.787 "enable_placement_id": 0, 00:14:41.787 "enable_zerocopy_send_server": true, 00:14:41.787 "enable_zerocopy_send_client": false, 00:14:41.787 "zerocopy_threshold": 0, 00:14:41.787 "tls_version": 0, 00:14:41.787 "enable_ktls": false 00:14:41.787 } 00:14:41.787 } 00:14:41.787 ] 00:14:41.787 }, 00:14:41.787 { 00:14:41.787 "subsystem": "vmd", 00:14:41.787 "config": [] 00:14:41.787 }, 00:14:41.787 { 00:14:41.787 "subsystem": "accel", 00:14:41.787 "config": [ 00:14:41.787 { 00:14:41.787 "method": "accel_set_options", 00:14:41.787 "params": { 00:14:41.787 "small_cache_size": 128, 00:14:41.787 "large_cache_size": 16, 00:14:41.787 "task_count": 2048, 00:14:41.787 "sequence_count": 2048, 00:14:41.787 "buf_count": 2048 00:14:41.787 } 00:14:41.787 } 00:14:41.787 ] 00:14:41.787 }, 00:14:41.787 { 00:14:41.787 "subsystem": "bdev", 00:14:41.787 "config": [ 00:14:41.787 { 00:14:41.787 "method": "bdev_set_options", 00:14:41.787 "params": { 00:14:41.787 "bdev_io_pool_size": 65535, 00:14:41.787 
"bdev_io_cache_size": 256, 00:14:41.787 "bdev_auto_examine": true, 00:14:41.787 "iobuf_small_cache_size": 128, 00:14:41.787 "iobuf_large_cache_size": 16 00:14:41.787 } 00:14:41.787 }, 00:14:41.787 { 00:14:41.787 "method": "bdev_raid_set_options", 00:14:41.787 "params": { 00:14:41.787 "process_window_size_kb": 1024 00:14:41.787 } 00:14:41.787 }, 00:14:41.787 { 00:14:41.787 "method": "bdev_iscsi_set_options", 00:14:41.787 "params": { 00:14:41.787 "timeout_sec": 30 00:14:41.787 } 00:14:41.787 }, 00:14:41.787 { 00:14:41.787 "method": "bdev_nvme_set_options", 00:14:41.787 "params": { 00:14:41.787 "action_on_timeout": "none", 00:14:41.787 "timeout_us": 0, 00:14:41.787 "timeout_admin_us": 0, 00:14:41.787 "keep_alive_timeout_ms": 10000, 00:14:41.787 "arbitration_burst": 0, 00:14:41.787 "low_priority_weight": 0, 00:14:41.787 "medium_priority_weight": 0, 00:14:41.787 "high_priority_weight": 0, 00:14:41.787 "nvme_adminq_poll_period_us": 10000, 00:14:41.787 "nvme_ioq_poll_period_us": 0, 00:14:41.787 "io_queue_requests": 512, 00:14:41.787 "delay_cmd_submit": true, 00:14:41.787 "transport_retry_count": 4, 00:14:41.787 "bdev_retry_count": 3, 00:14:41.787 "transport_ack_timeout": 0, 00:14:41.787 "ctrlr_loss_timeout_sec": 0, 00:14:41.787 "reconnect_delay_sec": 0, 00:14:41.787 "fast_io_fail_timeout_sec": 0, 00:14:41.787 "disable_auto_failback": false, 00:14:41.787 "generate_uuids": false, 00:14:41.787 "transport_tos": 0, 00:14:41.787 "nvme_error_stat": false, 00:14:41.787 "rdma_srq_size": 0, 00:14:41.787 "io_path_stat": false, 00:14:41.787 "allow_accel_sequence": false, 00:14:41.787 "rdma_max_cq_size": 0, 00:14:41.787 "rdma_cm_event_timeout_ms": 0, 00:14:41.787 "dhchap_digests": [ 00:14:41.787 "sha256", 00:14:41.787 "sha384", 00:14:41.787 "sha512" 00:14:41.787 ], 00:14:41.787 "dhchap_dhgroups": [ 00:14:41.787 "null", 00:14:41.787 "ffdhe2048", 00:14:41.787 "ffdhe3072", 00:14:41.787 "ffdhe4096", 00:14:41.787 "ffdhe6144", 00:14:41.787 "ffdhe8192" 00:14:41.787 ] 00:14:41.787 } 00:14:41.787 }, 00:14:41.787 { 00:14:41.787 "method": "bdev_nvme_attach_controller", 00:14:41.787 "params": { 00:14:41.787 "name": "TLSTEST", 00:14:41.787 "trtype": "TCP", 00:14:41.787 "adrfam": "IPv4", 00:14:41.787 "traddr": "10.0.0.2", 00:14:41.787 "trsvcid": "4420", 00:14:41.787 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.787 "prchk_reftag": false, 00:14:41.787 "prchk_guard": false, 00:14:41.787 "ctrlr_loss_timeout_sec": 0, 00:14:41.787 "reconnect_delay_sec": 0, 00:14:41.787 "fast_io_fail_timeout_sec": 0, 00:14:41.787 "psk": "/tmp/tmp.5IrwQBrtvl", 00:14:41.787 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:41.787 "hdgst": false, 00:14:41.787 "ddgst": false 00:14:41.787 } 00:14:41.787 }, 00:14:41.787 { 00:14:41.787 "method": "bdev_nvme_set_hotplug", 00:14:41.787 "params": { 00:14:41.787 "period_us": 100000, 00:14:41.787 "enable": false 00:14:41.787 } 00:14:41.787 }, 00:14:41.787 { 00:14:41.787 "method": "bdev_wait_for_examine" 00:14:41.787 } 00:14:41.787 ] 00:14:41.787 }, 00:14:41.787 { 00:14:41.787 "subsystem": "nbd", 00:14:41.787 "config": [] 00:14:41.787 } 00:14:41.787 ] 00:14:41.787 }' 00:14:41.787 14:19:23 -- target/tls.sh@199 -- # killprocess 3155161 00:14:41.787 14:19:23 -- common/autotest_common.sh@936 -- # '[' -z 3155161 ']' 00:14:41.787 14:19:23 -- common/autotest_common.sh@940 -- # kill -0 3155161 00:14:41.787 14:19:23 -- common/autotest_common.sh@941 -- # uname 00:14:41.787 14:19:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:41.787 14:19:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 3155161 00:14:41.787 14:19:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:41.787 14:19:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:41.787 14:19:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3155161' 00:14:41.787 killing process with pid 3155161 00:14:41.787 14:19:23 -- common/autotest_common.sh@955 -- # kill 3155161 00:14:41.787 Received shutdown signal, test time was about 10.000000 seconds 00:14:41.787 00:14:41.787 Latency(us) 00:14:41.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.787 =================================================================================================================== 00:14:41.787 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:41.787 [2024-04-26 14:19:23.273140] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:41.787 14:19:23 -- common/autotest_common.sh@960 -- # wait 3155161 00:14:42.046 14:19:23 -- target/tls.sh@200 -- # killprocess 3154937 00:14:42.046 14:19:23 -- common/autotest_common.sh@936 -- # '[' -z 3154937 ']' 00:14:42.046 14:19:23 -- common/autotest_common.sh@940 -- # kill -0 3154937 00:14:42.046 14:19:23 -- common/autotest_common.sh@941 -- # uname 00:14:42.046 14:19:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:42.046 14:19:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3154937 00:14:42.046 14:19:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:42.046 14:19:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:42.046 14:19:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3154937' 00:14:42.046 killing process with pid 3154937 00:14:42.046 14:19:23 -- common/autotest_common.sh@955 -- # kill 3154937 00:14:42.046 [2024-04-26 14:19:23.515535] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:42.046 14:19:23 -- common/autotest_common.sh@960 -- # wait 3154937 00:14:42.304 14:19:23 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:42.304 14:19:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:42.304 14:19:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:42.304 14:19:23 -- target/tls.sh@203 -- # echo '{ 00:14:42.304 "subsystems": [ 00:14:42.304 { 00:14:42.304 "subsystem": "keyring", 00:14:42.304 "config": [] 00:14:42.304 }, 00:14:42.304 { 00:14:42.304 "subsystem": "iobuf", 00:14:42.304 "config": [ 00:14:42.304 { 00:14:42.304 "method": "iobuf_set_options", 00:14:42.304 "params": { 00:14:42.304 "small_pool_count": 8192, 00:14:42.304 "large_pool_count": 1024, 00:14:42.304 "small_bufsize": 8192, 00:14:42.304 "large_bufsize": 135168 00:14:42.304 } 00:14:42.304 } 00:14:42.304 ] 00:14:42.304 }, 00:14:42.304 { 00:14:42.304 "subsystem": "sock", 00:14:42.304 "config": [ 00:14:42.304 { 00:14:42.304 "method": "sock_impl_set_options", 00:14:42.304 "params": { 00:14:42.305 "impl_name": "posix", 00:14:42.305 "recv_buf_size": 2097152, 00:14:42.305 "send_buf_size": 2097152, 00:14:42.305 "enable_recv_pipe": true, 00:14:42.305 "enable_quickack": false, 00:14:42.305 "enable_placement_id": 0, 00:14:42.305 "enable_zerocopy_send_server": true, 00:14:42.305 "enable_zerocopy_send_client": false, 00:14:42.305 "zerocopy_threshold": 0, 00:14:42.305 "tls_version": 0, 00:14:42.305 "enable_ktls": false 00:14:42.305 } 
00:14:42.305 }, 00:14:42.305 { 00:14:42.305 "method": "sock_impl_set_options", 00:14:42.305 "params": { 00:14:42.305 "impl_name": "ssl", 00:14:42.305 "recv_buf_size": 4096, 00:14:42.305 "send_buf_size": 4096, 00:14:42.305 "enable_recv_pipe": true, 00:14:42.305 "enable_quickack": false, 00:14:42.305 "enable_placement_id": 0, 00:14:42.305 "enable_zerocopy_send_server": true, 00:14:42.305 "enable_zerocopy_send_client": false, 00:14:42.305 "zerocopy_threshold": 0, 00:14:42.305 "tls_version": 0, 00:14:42.305 "enable_ktls": false 00:14:42.305 } 00:14:42.305 } 00:14:42.305 ] 00:14:42.305 }, 00:14:42.305 { 00:14:42.305 "subsystem": "vmd", 00:14:42.305 "config": [] 00:14:42.305 }, 00:14:42.305 { 00:14:42.305 "subsystem": "accel", 00:14:42.305 "config": [ 00:14:42.305 { 00:14:42.305 "method": "accel_set_options", 00:14:42.305 "params": { 00:14:42.305 "small_cache_size": 128, 00:14:42.305 "large_cache_size": 16, 00:14:42.305 "task_count": 2048, 00:14:42.305 "sequence_count": 2048, 00:14:42.305 "buf_count": 2048 00:14:42.305 } 00:14:42.305 } 00:14:42.305 ] 00:14:42.305 }, 00:14:42.305 { 00:14:42.305 "subsystem": "bdev", 00:14:42.305 "config": [ 00:14:42.305 { 00:14:42.305 "method": "bdev_set_options", 00:14:42.305 "params": { 00:14:42.305 "bdev_io_pool_size": 65535, 00:14:42.305 "bdev_io_cache_size": 256, 00:14:42.305 "bdev_auto_examine": true, 00:14:42.305 "iobuf_small_cache_size": 128, 00:14:42.305 "iobuf_large_cache_size": 16 00:14:42.305 } 00:14:42.305 }, 00:14:42.305 { 00:14:42.305 "method": "bdev_raid_set_options", 00:14:42.305 "params": { 00:14:42.305 "process_window_size_kb": 1024 00:14:42.305 } 00:14:42.305 }, 00:14:42.305 { 00:14:42.305 "method": "bdev_iscsi_set_options", 00:14:42.305 "params": { 00:14:42.305 "timeout_sec": 30 00:14:42.305 } 00:14:42.305 }, 00:14:42.305 { 00:14:42.305 "method": "bdev_nvme_set_options", 00:14:42.305 "params": { 00:14:42.305 "action_on_timeout": "none", 00:14:42.305 "timeout_us": 0, 00:14:42.305 "timeout_admin_us": 0, 00:14:42.305 "keep_alive_timeout_ms": 10000, 00:14:42.305 "arbitration_burst": 0, 00:14:42.305 "low_priority_weight": 0, 00:14:42.305 "medium_priority_weight": 0, 00:14:42.305 "high_priority_weight": 0, 00:14:42.305 "nvme_adminq_poll_period_us": 10000, 00:14:42.305 "nvme_ioq_poll_period_us": 0, 00:14:42.305 "io_queue_requests": 0, 00:14:42.305 "delay_cmd_submit": true, 00:14:42.305 "transport_retry_count": 4, 00:14:42.305 "bdev_retry_count": 3, 00:14:42.305 "transport_ack_timeout": 0, 00:14:42.305 "ctrlr_loss_timeout_sec": 0, 00:14:42.305 "reconnect_delay_sec": 0, 00:14:42.305 "fast_io_fail_timeout_sec": 0, 00:14:42.305 "disable_auto_failback": false, 00:14:42.305 "generate_uuids": false, 00:14:42.305 "transport_tos": 0, 00:14:42.305 "nvme_error_stat": false, 00:14:42.305 "rdma_srq_size": 0, 00:14:42.305 "io_path_stat": false, 00:14:42.305 "allow_accel_sequence": false, 00:14:42.305 "rdma_max_cq_size": 0, 00:14:42.305 "rdma_cm_event_timeout_ms": 0, 00:14:42.305 "dhchap_digests": [ 00:14:42.305 "sha256", 00:14:42.305 "sha384", 00:14:42.305 "sha512" 00:14:42.305 ], 00:14:42.305 "dhchap_dhgroups": [ 00:14:42.305 "null", 00:14:42.305 "ffdhe2048", 00:14:42.305 "ffdhe3072", 00:14:42.305 "ffdhe4096", 00:14:42.305 "ffdhe6144", 00:14:42.305 "ffdhe8192" 00:14:42.305 ] 00:14:42.305 } 00:14:42.305 }, 00:14:42.305 { 00:14:42.305 "method": "bdev_nvme_set_hotplug", 00:14:42.305 "params": { 00:14:42.305 "period_us": 100000, 00:14:42.305 "enable": false 00:14:42.305 } 00:14:42.305 }, 00:14:42.305 { 00:14:42.305 "method": "bdev_malloc_create", 00:14:42.305 
"params": { 00:14:42.305 "name": "malloc0", 00:14:42.305 "num_blocks": 8192, 00:14:42.305 "block_size": 4096, 00:14:42.305 "physical_block_size": 4096, 00:14:42.305 "uuid": "6d67177f-5d93-4a87-aeae-5f5e32048f50", 00:14:42.305 "optimal_io_boundary": 0 00:14:42.305 } 00:14:42.305 }, 00:14:42.305 { 00:14:42.305 "method": "bdev_wait_for_examine" 00:14:42.305 } 00:14:42.305 ] 00:14:42.305 }, 00:14:42.305 { 00:14:42.305 "subsystem": "nbd", 00:14:42.305 "config": [] 00:14:42.305 }, 00:14:42.305 { 00:14:42.305 "subsystem": "scheduler", 00:14:42.305 "config": [ 00:14:42.305 { 00:14:42.305 "method": "framework_set_scheduler", 00:14:42.305 "params": { 00:14:42.305 "name": "static" 00:14:42.305 } 00:14:42.305 } 00:14:42.305 ] 00:14:42.305 }, 00:14:42.305 { 00:14:42.305 "subsystem": "nvmf", 00:14:42.305 "config": [ 00:14:42.305 { 00:14:42.305 "method": "nvmf_set_config", 00:14:42.305 "params": { 00:14:42.305 "discovery_filter": "match_any", 00:14:42.305 "admin_cmd_passthru": { 00:14:42.305 "identify_ctrlr": false 00:14:42.305 } 00:14:42.305 } 00:14:42.305 }, 00:14:42.305 { 00:14:42.305 "method": "nvmf_set_max_subsystems", 00:14:42.305 "params": { 00:14:42.305 "max_subsystems": 1024 00:14:42.305 } 00:14:42.305 }, 00:14:42.305 { 00:14:42.305 "method": "nvmf_set_crdt", 00:14:42.305 "params": { 00:14:42.305 "crdt1": 0, 00:14:42.305 "crdt2": 0, 00:14:42.305 "crdt3": 0 00:14:42.305 } 00:14:42.305 }, 00:14:42.305 { 00:14:42.305 "method": "nvmf_create_transport", 00:14:42.305 "params": { 00:14:42.305 "trtype": "TCP", 00:14:42.305 "max_queue_depth": 128, 00:14:42.305 "max_io_qpairs_per_ctrlr": 127, 00:14:42.305 "in_capsule_data_size": 4096, 00:14:42.305 "max_io_size": 131072, 00:14:42.305 "io_unit_size": 131072, 00:14:42.305 "max_aq_depth": 128, 00:14:42.305 "num_shared_buffers": 511, 00:14:42.305 "buf_cache_size": 4294967295, 00:14:42.305 "dif_insert_or_strip": false, 00:14:42.305 "zcopy": false, 00:14:42.305 "c2h_success": false, 00:14:42.305 "sock_priority": 0, 00:14:42.305 "abort_timeout_sec": 1, 00:14:42.305 "ack_timeout": 0 00:14:42.305 } 00:14:42.305 }, 00:14:42.305 { 00:14:42.305 "method": "nvmf_create_subsystem", 00:14:42.305 "params": { 00:14:42.305 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.305 "allow_any_host": false, 00:14:42.305 "serial_number": "SPDK00000000000001", 00:14:42.305 "model_number": "SPDK bdev Controller", 00:14:42.305 "max_namespaces": 10, 00:14:42.305 "min_cntlid": 1, 00:14:42.305 "max_cntlid": 65519, 00:14:42.305 "ana_reporting": false 00:14:42.305 } 00:14:42.305 }, 00:14:42.305 { 00:14:42.305 "method": "nvmf_subsystem_add_host", 00:14:42.305 "params": { 00:14:42.305 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.305 "host": "nqn.2016-06.io.spdk:host1", 00:14:42.305 "psk": "/tmp/tmp.5IrwQBrtvl" 00:14:42.305 } 00:14:42.305 }, 00:14:42.305 { 00:14:42.305 "method": "nvmf_subsystem_add_ns", 00:14:42.305 "params": { 00:14:42.305 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.305 "namespace": { 00:14:42.305 "nsid": 1, 00:14:42.305 "bdev_name": "malloc0", 00:14:42.305 "nguid": "6D67177F5D934A87AEAE5F5E32048F50", 00:14:42.305 "uuid": "6d67177f-5d93-4a87-aeae-5f5e32048f50", 00:14:42.305 "no_auto_visible": false 00:14:42.305 } 00:14:42.305 } 00:14:42.305 }, 00:14:42.305 { 00:14:42.305 "method": "nvmf_subsystem_add_listener", 00:14:42.305 "params": { 00:14:42.305 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.305 "listen_address": { 00:14:42.305 "trtype": "TCP", 00:14:42.306 "adrfam": "IPv4", 00:14:42.306 "traddr": "10.0.0.2", 00:14:42.306 "trsvcid": "4420" 00:14:42.306 }, 00:14:42.306 
"secure_channel": true 00:14:42.306 } 00:14:42.306 } 00:14:42.306 ] 00:14:42.306 } 00:14:42.306 ] 00:14:42.306 }' 00:14:42.306 14:19:23 -- common/autotest_common.sh@10 -- # set +x 00:14:42.306 14:19:23 -- nvmf/common.sh@470 -- # nvmfpid=3155369 00:14:42.306 14:19:23 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:42.306 14:19:23 -- nvmf/common.sh@471 -- # waitforlisten 3155369 00:14:42.306 14:19:23 -- common/autotest_common.sh@817 -- # '[' -z 3155369 ']' 00:14:42.306 14:19:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.306 14:19:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:42.306 14:19:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.306 14:19:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:42.306 14:19:23 -- common/autotest_common.sh@10 -- # set +x 00:14:42.306 [2024-04-26 14:19:23.798256] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:14:42.306 [2024-04-26 14:19:23.798342] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.306 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.306 [2024-04-26 14:19:23.861764] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.564 [2024-04-26 14:19:23.977572] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.564 [2024-04-26 14:19:23.977648] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:42.564 [2024-04-26 14:19:23.977667] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:42.564 [2024-04-26 14:19:23.977683] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:42.564 [2024-04-26 14:19:23.977695] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:42.564 [2024-04-26 14:19:23.977784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.822 [2024-04-26 14:19:24.188123] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.822 [2024-04-26 14:19:24.204079] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:42.822 [2024-04-26 14:19:24.220131] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:42.822 [2024-04-26 14:19:24.231803] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:43.388 14:19:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:43.388 14:19:24 -- common/autotest_common.sh@850 -- # return 0 00:14:43.388 14:19:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:43.388 14:19:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:43.388 14:19:24 -- common/autotest_common.sh@10 -- # set +x 00:14:43.388 14:19:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:43.388 14:19:24 -- target/tls.sh@207 -- # bdevperf_pid=3155491 00:14:43.388 14:19:24 -- target/tls.sh@208 -- # waitforlisten 3155491 /var/tmp/bdevperf.sock 00:14:43.388 14:19:24 -- common/autotest_common.sh@817 -- # '[' -z 3155491 ']' 00:14:43.388 14:19:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:43.388 14:19:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:43.388 14:19:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:43.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
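The initiator gets the same config-driven treatment: the bdevperf JSON captured earlier through save_config on /var/tmp/bdevperf.sock, whose bdev_nvme_attach_controller entry carries the psk path, is fed to a fresh bdevperf below through /dev/fd/63. The TLS controller therefore attaches during startup rather than via an explicit RPC; roughly (bdevperf.json standing in for the script's shell variable):

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(cat bdevperf.json)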
00:14:43.388 14:19:24 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:43.388 14:19:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:43.388 14:19:24 -- common/autotest_common.sh@10 -- # set +x 00:14:43.388 14:19:24 -- target/tls.sh@204 -- # echo '{ 00:14:43.388 "subsystems": [ 00:14:43.388 { 00:14:43.388 "subsystem": "keyring", 00:14:43.388 "config": [] 00:14:43.388 }, 00:14:43.388 { 00:14:43.388 "subsystem": "iobuf", 00:14:43.388 "config": [ 00:14:43.388 { 00:14:43.388 "method": "iobuf_set_options", 00:14:43.388 "params": { 00:14:43.388 "small_pool_count": 8192, 00:14:43.388 "large_pool_count": 1024, 00:14:43.388 "small_bufsize": 8192, 00:14:43.388 "large_bufsize": 135168 00:14:43.388 } 00:14:43.388 } 00:14:43.388 ] 00:14:43.388 }, 00:14:43.388 { 00:14:43.388 "subsystem": "sock", 00:14:43.388 "config": [ 00:14:43.388 { 00:14:43.388 "method": "sock_impl_set_options", 00:14:43.388 "params": { 00:14:43.388 "impl_name": "posix", 00:14:43.388 "recv_buf_size": 2097152, 00:14:43.388 "send_buf_size": 2097152, 00:14:43.388 "enable_recv_pipe": true, 00:14:43.388 "enable_quickack": false, 00:14:43.388 "enable_placement_id": 0, 00:14:43.388 "enable_zerocopy_send_server": true, 00:14:43.388 "enable_zerocopy_send_client": false, 00:14:43.388 "zerocopy_threshold": 0, 00:14:43.388 "tls_version": 0, 00:14:43.388 "enable_ktls": false 00:14:43.388 } 00:14:43.388 }, 00:14:43.388 { 00:14:43.388 "method": "sock_impl_set_options", 00:14:43.388 "params": { 00:14:43.388 "impl_name": "ssl", 00:14:43.388 "recv_buf_size": 4096, 00:14:43.388 "send_buf_size": 4096, 00:14:43.388 "enable_recv_pipe": true, 00:14:43.388 "enable_quickack": false, 00:14:43.388 "enable_placement_id": 0, 00:14:43.388 "enable_zerocopy_send_server": true, 00:14:43.388 "enable_zerocopy_send_client": false, 00:14:43.388 "zerocopy_threshold": 0, 00:14:43.388 "tls_version": 0, 00:14:43.388 "enable_ktls": false 00:14:43.388 } 00:14:43.388 } 00:14:43.388 ] 00:14:43.388 }, 00:14:43.388 { 00:14:43.388 "subsystem": "vmd", 00:14:43.388 "config": [] 00:14:43.388 }, 00:14:43.388 { 00:14:43.388 "subsystem": "accel", 00:14:43.388 "config": [ 00:14:43.388 { 00:14:43.388 "method": "accel_set_options", 00:14:43.388 "params": { 00:14:43.388 "small_cache_size": 128, 00:14:43.388 "large_cache_size": 16, 00:14:43.388 "task_count": 2048, 00:14:43.388 "sequence_count": 2048, 00:14:43.388 "buf_count": 2048 00:14:43.388 } 00:14:43.388 } 00:14:43.388 ] 00:14:43.388 }, 00:14:43.389 { 00:14:43.389 "subsystem": "bdev", 00:14:43.389 "config": [ 00:14:43.389 { 00:14:43.389 "method": "bdev_set_options", 00:14:43.389 "params": { 00:14:43.389 "bdev_io_pool_size": 65535, 00:14:43.389 "bdev_io_cache_size": 256, 00:14:43.389 "bdev_auto_examine": true, 00:14:43.389 "iobuf_small_cache_size": 128, 00:14:43.389 "iobuf_large_cache_size": 16 00:14:43.389 } 00:14:43.389 }, 00:14:43.389 { 00:14:43.389 "method": "bdev_raid_set_options", 00:14:43.389 "params": { 00:14:43.389 "process_window_size_kb": 1024 00:14:43.389 } 00:14:43.389 }, 00:14:43.389 { 00:14:43.389 "method": "bdev_iscsi_set_options", 00:14:43.389 "params": { 00:14:43.389 "timeout_sec": 30 00:14:43.389 } 00:14:43.389 }, 00:14:43.389 { 00:14:43.389 "method": "bdev_nvme_set_options", 00:14:43.389 "params": { 00:14:43.389 "action_on_timeout": "none", 00:14:43.389 "timeout_us": 0, 00:14:43.389 "timeout_admin_us": 0, 00:14:43.389 "keep_alive_timeout_ms": 10000, 00:14:43.389 
"arbitration_burst": 0, 00:14:43.389 "low_priority_weight": 0, 00:14:43.389 "medium_priority_weight": 0, 00:14:43.389 "high_priority_weight": 0, 00:14:43.389 "nvme_adminq_poll_period_us": 10000, 00:14:43.389 "nvme_ioq_poll_period_us": 0, 00:14:43.389 "io_queue_requests": 512, 00:14:43.389 "delay_cmd_submit": true, 00:14:43.389 "transport_retry_count": 4, 00:14:43.389 "bdev_retry_count": 3, 00:14:43.389 "transport_ack_timeout": 0, 00:14:43.389 "ctrlr_loss_timeout_sec": 0, 00:14:43.389 "reconnect_delay_sec": 0, 00:14:43.389 "fast_io_fail_timeout_sec": 0, 00:14:43.389 "disable_auto_failback": false, 00:14:43.389 "generate_uuids": false, 00:14:43.389 "transport_tos": 0, 00:14:43.389 "nvme_error_stat": false, 00:14:43.389 "rdma_srq_size": 0, 00:14:43.389 "io_path_stat": false, 00:14:43.389 "allow_accel_sequence": false, 00:14:43.389 "rdma_max_cq_size": 0, 00:14:43.389 "rdma_cm_event_timeout_ms": 0, 00:14:43.389 "dhchap_digests": [ 00:14:43.389 "sha256", 00:14:43.389 "sha384", 00:14:43.389 "sha512" 00:14:43.389 ], 00:14:43.389 "dhchap_dhgroups": [ 00:14:43.389 "null", 00:14:43.389 "ffdhe2048", 00:14:43.389 "ffdhe3072", 00:14:43.389 "ffdhe4096", 00:14:43.389 "ffdhe6144", 00:14:43.389 "ffdhe8192" 00:14:43.389 ] 00:14:43.389 } 00:14:43.389 }, 00:14:43.389 { 00:14:43.389 "method": "bdev_nvme_attach_controller", 00:14:43.389 "params": { 00:14:43.389 "name": "TLSTEST", 00:14:43.389 "trtype": "TCP", 00:14:43.389 "adrfam": "IPv4", 00:14:43.389 "traddr": "10.0.0.2", 00:14:43.389 "trsvcid": "4420", 00:14:43.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.389 "prchk_reftag": false, 00:14:43.389 "prchk_guard": false, 00:14:43.389 "ctrlr_loss_timeout_sec": 0, 00:14:43.389 "reconnect_delay_sec": 0, 00:14:43.389 "fast_io_fail_timeout_sec": 0, 00:14:43.389 "psk": "/tmp/tmp.5IrwQBrtvl", 00:14:43.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:43.389 "hdgst": false, 00:14:43.389 "ddgst": false 00:14:43.389 } 00:14:43.389 }, 00:14:43.389 { 00:14:43.389 "method": "bdev_nvme_set_hotplug", 00:14:43.389 "params": { 00:14:43.389 "period_us": 100000, 00:14:43.389 "enable": false 00:14:43.389 } 00:14:43.389 }, 00:14:43.389 { 00:14:43.389 "method": "bdev_wait_for_examine" 00:14:43.389 } 00:14:43.389 ] 00:14:43.389 }, 00:14:43.389 { 00:14:43.389 "subsystem": "nbd", 00:14:43.389 "config": [] 00:14:43.389 } 00:14:43.389 ] 00:14:43.389 }' 00:14:43.389 [2024-04-26 14:19:24.844563] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
00:14:43.389 [2024-04-26 14:19:24.844674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3155491 ] 00:14:43.389 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.389 [2024-04-26 14:19:24.898612] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.647 [2024-04-26 14:19:25.016039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.647 [2024-04-26 14:19:25.170037] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:43.647 [2024-04-26 14:19:25.170171] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:44.582 14:19:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:44.582 14:19:25 -- common/autotest_common.sh@850 -- # return 0 00:14:44.582 14:19:25 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:44.582 Running I/O for 10 seconds... 00:14:54.549 00:14:54.549 Latency(us) 00:14:54.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.549 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:54.549 Verification LBA range: start 0x0 length 0x2000 00:14:54.549 TLSTESTn1 : 10.02 3241.34 12.66 0.00 0.00 39416.70 7427.41 57865.86 00:14:54.549 =================================================================================================================== 00:14:54.549 Total : 3241.34 12.66 0.00 0.00 39416.70 7427.41 57865.86 00:14:54.549 0 00:14:54.549 14:19:36 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:54.549 14:19:36 -- target/tls.sh@214 -- # killprocess 3155491 00:14:54.549 14:19:36 -- common/autotest_common.sh@936 -- # '[' -z 3155491 ']' 00:14:54.549 14:19:36 -- common/autotest_common.sh@940 -- # kill -0 3155491 00:14:54.549 14:19:36 -- common/autotest_common.sh@941 -- # uname 00:14:54.549 14:19:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:54.549 14:19:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3155491 00:14:54.549 14:19:36 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:14:54.549 14:19:36 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:14:54.549 14:19:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3155491' 00:14:54.549 killing process with pid 3155491 00:14:54.549 14:19:36 -- common/autotest_common.sh@955 -- # kill 3155491 00:14:54.549 Received shutdown signal, test time was about 10.000000 seconds 00:14:54.549 00:14:54.549 Latency(us) 00:14:54.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.549 =================================================================================================================== 00:14:54.549 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:54.549 [2024-04-26 14:19:36.083501] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:54.549 14:19:36 -- common/autotest_common.sh@960 -- # wait 3155491 00:14:54.807 14:19:36 -- target/tls.sh@215 -- # killprocess 3155369 00:14:54.807 14:19:36 -- common/autotest_common.sh@936 -- # '[' -z 3155369 ']' 
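A quick check on the latency table above: the MiB/s column is just IOPS times the 4096-byte I/O size, 3241.34 x 4096 = 13,276,529 B/s, and 13,276,529 / 1,048,576 ≈ 12.66 MiB/s, which matches. The second, all-zero table printed after "Received shutdown signal" appears to be the teardown summary rather than a second measurement.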
00:14:54.807 14:19:36 -- common/autotest_common.sh@940 -- # kill -0 3155369 00:14:54.807 14:19:36 -- common/autotest_common.sh@941 -- # uname 00:14:54.807 14:19:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:54.807 14:19:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3155369 00:14:54.807 14:19:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:54.807 14:19:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:54.807 14:19:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3155369' 00:14:54.807 killing process with pid 3155369 00:14:54.807 14:19:36 -- common/autotest_common.sh@955 -- # kill 3155369 00:14:54.807 [2024-04-26 14:19:36.326358] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:54.807 14:19:36 -- common/autotest_common.sh@960 -- # wait 3155369 00:14:55.065 14:19:36 -- target/tls.sh@218 -- # nvmfappstart 00:14:55.065 14:19:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:55.065 14:19:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:55.065 14:19:36 -- common/autotest_common.sh@10 -- # set +x 00:14:55.065 14:19:36 -- nvmf/common.sh@470 -- # nvmfpid=3156509 00:14:55.065 14:19:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:55.065 14:19:36 -- nvmf/common.sh@471 -- # waitforlisten 3156509 00:14:55.065 14:19:36 -- common/autotest_common.sh@817 -- # '[' -z 3156509 ']' 00:14:55.065 14:19:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.065 14:19:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:55.065 14:19:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.065 14:19:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:55.065 14:19:36 -- common/autotest_common.sh@10 -- # set +x 00:14:55.065 [2024-04-26 14:19:36.608971] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:14:55.065 [2024-04-26 14:19:36.609071] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.323 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.323 [2024-04-26 14:19:36.674683] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.323 [2024-04-26 14:19:36.792194] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:55.323 [2024-04-26 14:19:36.792250] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:55.323 [2024-04-26 14:19:36.792266] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:55.323 [2024-04-26 14:19:36.792279] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:55.323 [2024-04-26 14:19:36.792291] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:55.324 [2024-04-26 14:19:36.792339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.582 14:19:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:55.582 14:19:36 -- common/autotest_common.sh@850 -- # return 0 00:14:55.582 14:19:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:55.582 14:19:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:55.582 14:19:36 -- common/autotest_common.sh@10 -- # set +x 00:14:55.582 14:19:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.582 14:19:36 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.5IrwQBrtvl 00:14:55.582 14:19:36 -- target/tls.sh@49 -- # local key=/tmp/tmp.5IrwQBrtvl 00:14:55.582 14:19:36 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:55.840 [2024-04-26 14:19:37.194917] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.840 14:19:37 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:56.098 14:19:37 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:56.357 [2024-04-26 14:19:37.780448] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:56.357 [2024-04-26 14:19:37.780682] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:56.357 14:19:37 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:56.615 malloc0 00:14:56.615 14:19:38 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:56.873 14:19:38 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5IrwQBrtvl 00:14:57.131 [2024-04-26 14:19:38.677456] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:57.131 14:19:38 -- target/tls.sh@222 -- # bdevperf_pid=3156732 00:14:57.131 14:19:38 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:57.131 14:19:38 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:57.131 14:19:38 -- target/tls.sh@225 -- # waitforlisten 3156732 /var/tmp/bdevperf.sock 00:14:57.131 14:19:38 -- common/autotest_common.sh@817 -- # '[' -z 3156732 ']' 00:14:57.131 14:19:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:57.131 14:19:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:57.131 14:19:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:57.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
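The setup_nvmf_tgt steps above (target/tls.sh lines 51 through 58) are the complete target-side TLS bring-up. Gathered into one sequence, every command taken from the xtrace, with the long rpc.py path shortened:

  rpc.py nvmf_create_transport -t tcp -o                              # TCP transport
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0                        # 32 MiB, 4096-byte blocks
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5IrwQBrtvl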
00:14:57.131 14:19:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:57.131 14:19:38 -- common/autotest_common.sh@10 -- # set +x 00:14:57.389 [2024-04-26 14:19:38.740330] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:14:57.389 [2024-04-26 14:19:38.740426] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3156732 ] 00:14:57.389 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.389 [2024-04-26 14:19:38.800684] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.389 [2024-04-26 14:19:38.915405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.647 14:19:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:57.647 14:19:39 -- common/autotest_common.sh@850 -- # return 0 00:14:57.647 14:19:39 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5IrwQBrtvl 00:14:57.905 14:19:39 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:58.163 [2024-04-26 14:19:39.580424] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:58.163 nvme0n1 00:14:58.163 14:19:39 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:58.422 Running I/O for 1 seconds... 
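On the initiator side the ordering matters: the key must be registered with the bdevperf app's keyring before the controller that references it is attached, and only then is I/O started through the helper script. Condensed from the xtrace above:

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5IrwQBrtvl
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1   # exposes nvme0n1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests          # runs the -t 1 verify job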
00:14:59.356 00:14:59.356 Latency(us) 00:14:59.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.356 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:59.356 Verification LBA range: start 0x0 length 0x2000 00:14:59.356 nvme0n1 : 1.02 2210.84 8.64 0.00 0.00 57301.96 11942.12 57089.14 00:14:59.356 =================================================================================================================== 00:14:59.356 Total : 2210.84 8.64 0.00 0.00 57301.96 11942.12 57089.14 00:14:59.356 0 00:14:59.356 14:19:40 -- target/tls.sh@234 -- # killprocess 3156732 00:14:59.356 14:19:40 -- common/autotest_common.sh@936 -- # '[' -z 3156732 ']' 00:14:59.356 14:19:40 -- common/autotest_common.sh@940 -- # kill -0 3156732 00:14:59.356 14:19:40 -- common/autotest_common.sh@941 -- # uname 00:14:59.356 14:19:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:59.356 14:19:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3156732 00:14:59.356 14:19:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:59.356 14:19:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:59.356 14:19:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3156732' 00:14:59.356 killing process with pid 3156732 00:14:59.356 14:19:40 -- common/autotest_common.sh@955 -- # kill 3156732 00:14:59.356 Received shutdown signal, test time was about 1.000000 seconds 00:14:59.356 00:14:59.356 Latency(us) 00:14:59.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.356 =================================================================================================================== 00:14:59.356 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:59.356 14:19:40 -- common/autotest_common.sh@960 -- # wait 3156732 00:14:59.614 14:19:41 -- target/tls.sh@235 -- # killprocess 3156509 00:14:59.614 14:19:41 -- common/autotest_common.sh@936 -- # '[' -z 3156509 ']' 00:14:59.614 14:19:41 -- common/autotest_common.sh@940 -- # kill -0 3156509 00:14:59.614 14:19:41 -- common/autotest_common.sh@941 -- # uname 00:14:59.614 14:19:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:59.614 14:19:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3156509 00:14:59.614 14:19:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:59.614 14:19:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:59.614 14:19:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3156509' 00:14:59.614 killing process with pid 3156509 00:14:59.614 14:19:41 -- common/autotest_common.sh@955 -- # kill 3156509 00:14:59.614 [2024-04-26 14:19:41.094089] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:59.614 14:19:41 -- common/autotest_common.sh@960 -- # wait 3156509 00:14:59.873 14:19:41 -- target/tls.sh@238 -- # nvmfappstart 00:14:59.873 14:19:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:59.873 14:19:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:59.873 14:19:41 -- common/autotest_common.sh@10 -- # set +x 00:14:59.873 14:19:41 -- nvmf/common.sh@470 -- # nvmfpid=3157030 00:14:59.873 14:19:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:59.873 14:19:41 -- nvmf/common.sh@471 -- # waitforlisten 3157030 
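The kill sequences that recur throughout this log all come from the killprocess helper in autotest_common.sh. Below is a reconstruction of its shape from the visible xtrace; the reactor_N comparison at sh@942/sh@946 guards against signalling a sudo wrapper instead of the SPDK reactor. Treat this as a sketch, not the helper's exact source:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 1                         # is it still alive?
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      [ "$process_name" = sudo ] && return 1             # never SIGTERM the sudo wrapper directly
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                        # reap it and propagate the exit status
  }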
00:14:59.873 14:19:41 -- common/autotest_common.sh@817 -- # '[' -z 3157030 ']' 00:14:59.873 14:19:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.873 14:19:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:59.873 14:19:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.873 14:19:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:59.873 14:19:41 -- common/autotest_common.sh@10 -- # set +x 00:14:59.873 [2024-04-26 14:19:41.373693] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:14:59.873 [2024-04-26 14:19:41.373787] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.873 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.873 [2024-04-26 14:19:41.439138] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.132 [2024-04-26 14:19:41.556044] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.132 [2024-04-26 14:19:41.556112] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.132 [2024-04-26 14:19:41.556128] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.132 [2024-04-26 14:19:41.556142] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.132 [2024-04-26 14:19:41.556154] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
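waitforlisten (autotest_common.sh@817-826 above) is the other half of every -z/-r launch in this log: the app starts suspended on its RPC socket and the harness polls until the socket answers before issuing RPCs. A rough reconstruction, assuming rpc.py rpc_get_methods as the liveness probe; the real helper may differ in detail:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
      [ -z "$pid" ] && return 1
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" || return 1                     # app died during startup
          rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1
  }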
00:15:00.132 [2024-04-26 14:19:41.556187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.132 14:19:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:00.132 14:19:41 -- common/autotest_common.sh@850 -- # return 0 00:15:00.132 14:19:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:00.132 14:19:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:00.132 14:19:41 -- common/autotest_common.sh@10 -- # set +x 00:15:00.132 14:19:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.132 14:19:41 -- target/tls.sh@239 -- # rpc_cmd 00:15:00.132 14:19:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:00.132 14:19:41 -- common/autotest_common.sh@10 -- # set +x 00:15:00.132 [2024-04-26 14:19:41.688459] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.391 malloc0 00:15:00.391 [2024-04-26 14:19:41.719162] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:00.391 [2024-04-26 14:19:41.719388] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:00.391 14:19:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:00.391 14:19:41 -- target/tls.sh@252 -- # bdevperf_pid=3157061 00:15:00.391 14:19:41 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:00.391 14:19:41 -- target/tls.sh@254 -- # waitforlisten 3157061 /var/tmp/bdevperf.sock 00:15:00.391 14:19:41 -- common/autotest_common.sh@817 -- # '[' -z 3157061 ']' 00:15:00.391 14:19:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:00.391 14:19:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:00.391 14:19:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:00.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:00.391 14:19:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:00.391 14:19:41 -- common/autotest_common.sh@10 -- # set +x 00:15:00.391 [2024-04-26 14:19:41.792413] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
00:15:00.391 [2024-04-26 14:19:41.792511] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3157061 ] 00:15:00.391 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.391 [2024-04-26 14:19:41.854021] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.649 [2024-04-26 14:19:41.971927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.649 14:19:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:00.649 14:19:42 -- common/autotest_common.sh@850 -- # return 0 00:15:00.649 14:19:42 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5IrwQBrtvl 00:15:00.913 14:19:42 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:01.209 [2024-04-26 14:19:42.644994] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:01.209 nvme0n1 00:15:01.209 14:19:42 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:01.506 Running I/O for 1 seconds... 00:15:02.440 00:15:02.440 Latency(us) 00:15:02.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.440 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:02.440 Verification LBA range: start 0x0 length 0x2000 00:15:02.440 nvme0n1 : 1.02 3286.49 12.84 0.00 0.00 38514.73 8107.05 35535.08 00:15:02.440 =================================================================================================================== 00:15:02.440 Total : 3286.49 12.84 0.00 0.00 38514.73 8107.05 35535.08 00:15:02.440 0 00:15:02.440 14:19:43 -- target/tls.sh@263 -- # rpc_cmd save_config 00:15:02.440 14:19:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:02.440 14:19:43 -- common/autotest_common.sh@10 -- # set +x 00:15:02.440 14:19:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:02.440 14:19:43 -- target/tls.sh@263 -- # tgtcfg='{ 00:15:02.440 "subsystems": [ 00:15:02.440 { 00:15:02.440 "subsystem": "keyring", 00:15:02.440 "config": [ 00:15:02.440 { 00:15:02.440 "method": "keyring_file_add_key", 00:15:02.440 "params": { 00:15:02.440 "name": "key0", 00:15:02.440 "path": "/tmp/tmp.5IrwQBrtvl" 00:15:02.440 } 00:15:02.440 } 00:15:02.440 ] 00:15:02.440 }, 00:15:02.440 { 00:15:02.440 "subsystem": "iobuf", 00:15:02.440 "config": [ 00:15:02.440 { 00:15:02.440 "method": "iobuf_set_options", 00:15:02.440 "params": { 00:15:02.440 "small_pool_count": 8192, 00:15:02.440 "large_pool_count": 1024, 00:15:02.440 "small_bufsize": 8192, 00:15:02.440 "large_bufsize": 135168 00:15:02.440 } 00:15:02.440 } 00:15:02.440 ] 00:15:02.440 }, 00:15:02.440 { 00:15:02.440 "subsystem": "sock", 00:15:02.440 "config": [ 00:15:02.440 { 00:15:02.440 "method": "sock_impl_set_options", 00:15:02.440 "params": { 00:15:02.440 "impl_name": "posix", 00:15:02.440 "recv_buf_size": 2097152, 00:15:02.440 "send_buf_size": 2097152, 00:15:02.440 "enable_recv_pipe": true, 00:15:02.440 "enable_quickack": false, 00:15:02.440 "enable_placement_id": 0, 00:15:02.440 
"enable_zerocopy_send_server": true, 00:15:02.440 "enable_zerocopy_send_client": false, 00:15:02.440 "zerocopy_threshold": 0, 00:15:02.440 "tls_version": 0, 00:15:02.440 "enable_ktls": false 00:15:02.440 } 00:15:02.440 }, 00:15:02.440 { 00:15:02.440 "method": "sock_impl_set_options", 00:15:02.440 "params": { 00:15:02.440 "impl_name": "ssl", 00:15:02.440 "recv_buf_size": 4096, 00:15:02.440 "send_buf_size": 4096, 00:15:02.440 "enable_recv_pipe": true, 00:15:02.440 "enable_quickack": false, 00:15:02.440 "enable_placement_id": 0, 00:15:02.440 "enable_zerocopy_send_server": true, 00:15:02.440 "enable_zerocopy_send_client": false, 00:15:02.440 "zerocopy_threshold": 0, 00:15:02.440 "tls_version": 0, 00:15:02.440 "enable_ktls": false 00:15:02.440 } 00:15:02.440 } 00:15:02.440 ] 00:15:02.440 }, 00:15:02.440 { 00:15:02.440 "subsystem": "vmd", 00:15:02.440 "config": [] 00:15:02.440 }, 00:15:02.440 { 00:15:02.440 "subsystem": "accel", 00:15:02.440 "config": [ 00:15:02.440 { 00:15:02.440 "method": "accel_set_options", 00:15:02.440 "params": { 00:15:02.440 "small_cache_size": 128, 00:15:02.440 "large_cache_size": 16, 00:15:02.440 "task_count": 2048, 00:15:02.440 "sequence_count": 2048, 00:15:02.440 "buf_count": 2048 00:15:02.440 } 00:15:02.440 } 00:15:02.440 ] 00:15:02.440 }, 00:15:02.440 { 00:15:02.440 "subsystem": "bdev", 00:15:02.440 "config": [ 00:15:02.440 { 00:15:02.440 "method": "bdev_set_options", 00:15:02.440 "params": { 00:15:02.440 "bdev_io_pool_size": 65535, 00:15:02.440 "bdev_io_cache_size": 256, 00:15:02.440 "bdev_auto_examine": true, 00:15:02.440 "iobuf_small_cache_size": 128, 00:15:02.440 "iobuf_large_cache_size": 16 00:15:02.440 } 00:15:02.440 }, 00:15:02.440 { 00:15:02.440 "method": "bdev_raid_set_options", 00:15:02.440 "params": { 00:15:02.440 "process_window_size_kb": 1024 00:15:02.440 } 00:15:02.440 }, 00:15:02.440 { 00:15:02.440 "method": "bdev_iscsi_set_options", 00:15:02.440 "params": { 00:15:02.440 "timeout_sec": 30 00:15:02.440 } 00:15:02.440 }, 00:15:02.440 { 00:15:02.440 "method": "bdev_nvme_set_options", 00:15:02.440 "params": { 00:15:02.440 "action_on_timeout": "none", 00:15:02.440 "timeout_us": 0, 00:15:02.440 "timeout_admin_us": 0, 00:15:02.440 "keep_alive_timeout_ms": 10000, 00:15:02.440 "arbitration_burst": 0, 00:15:02.440 "low_priority_weight": 0, 00:15:02.440 "medium_priority_weight": 0, 00:15:02.440 "high_priority_weight": 0, 00:15:02.440 "nvme_adminq_poll_period_us": 10000, 00:15:02.440 "nvme_ioq_poll_period_us": 0, 00:15:02.440 "io_queue_requests": 0, 00:15:02.440 "delay_cmd_submit": true, 00:15:02.440 "transport_retry_count": 4, 00:15:02.440 "bdev_retry_count": 3, 00:15:02.440 "transport_ack_timeout": 0, 00:15:02.440 "ctrlr_loss_timeout_sec": 0, 00:15:02.440 "reconnect_delay_sec": 0, 00:15:02.440 "fast_io_fail_timeout_sec": 0, 00:15:02.440 "disable_auto_failback": false, 00:15:02.440 "generate_uuids": false, 00:15:02.440 "transport_tos": 0, 00:15:02.440 "nvme_error_stat": false, 00:15:02.440 "rdma_srq_size": 0, 00:15:02.440 "io_path_stat": false, 00:15:02.440 "allow_accel_sequence": false, 00:15:02.440 "rdma_max_cq_size": 0, 00:15:02.440 "rdma_cm_event_timeout_ms": 0, 00:15:02.440 "dhchap_digests": [ 00:15:02.440 "sha256", 00:15:02.440 "sha384", 00:15:02.440 "sha512" 00:15:02.440 ], 00:15:02.440 "dhchap_dhgroups": [ 00:15:02.440 "null", 00:15:02.440 "ffdhe2048", 00:15:02.440 "ffdhe3072", 00:15:02.440 "ffdhe4096", 00:15:02.440 "ffdhe6144", 00:15:02.440 "ffdhe8192" 00:15:02.440 ] 00:15:02.440 } 00:15:02.440 }, 00:15:02.440 { 00:15:02.440 "method": 
"bdev_nvme_set_hotplug", 00:15:02.440 "params": { 00:15:02.440 "period_us": 100000, 00:15:02.440 "enable": false 00:15:02.440 } 00:15:02.440 }, 00:15:02.440 { 00:15:02.440 "method": "bdev_malloc_create", 00:15:02.440 "params": { 00:15:02.440 "name": "malloc0", 00:15:02.440 "num_blocks": 8192, 00:15:02.440 "block_size": 4096, 00:15:02.440 "physical_block_size": 4096, 00:15:02.440 "uuid": "a2c86915-d1ff-4e7b-a4e4-dc41fe7f7cfa", 00:15:02.440 "optimal_io_boundary": 0 00:15:02.440 } 00:15:02.440 }, 00:15:02.440 { 00:15:02.440 "method": "bdev_wait_for_examine" 00:15:02.440 } 00:15:02.440 ] 00:15:02.440 }, 00:15:02.440 { 00:15:02.440 "subsystem": "nbd", 00:15:02.440 "config": [] 00:15:02.440 }, 00:15:02.440 { 00:15:02.440 "subsystem": "scheduler", 00:15:02.440 "config": [ 00:15:02.440 { 00:15:02.440 "method": "framework_set_scheduler", 00:15:02.440 "params": { 00:15:02.440 "name": "static" 00:15:02.440 } 00:15:02.440 } 00:15:02.440 ] 00:15:02.440 }, 00:15:02.440 { 00:15:02.440 "subsystem": "nvmf", 00:15:02.440 "config": [ 00:15:02.440 { 00:15:02.440 "method": "nvmf_set_config", 00:15:02.440 "params": { 00:15:02.440 "discovery_filter": "match_any", 00:15:02.440 "admin_cmd_passthru": { 00:15:02.440 "identify_ctrlr": false 00:15:02.440 } 00:15:02.441 } 00:15:02.441 }, 00:15:02.441 { 00:15:02.441 "method": "nvmf_set_max_subsystems", 00:15:02.441 "params": { 00:15:02.441 "max_subsystems": 1024 00:15:02.441 } 00:15:02.441 }, 00:15:02.441 { 00:15:02.441 "method": "nvmf_set_crdt", 00:15:02.441 "params": { 00:15:02.441 "crdt1": 0, 00:15:02.441 "crdt2": 0, 00:15:02.441 "crdt3": 0 00:15:02.441 } 00:15:02.441 }, 00:15:02.441 { 00:15:02.441 "method": "nvmf_create_transport", 00:15:02.441 "params": { 00:15:02.441 "trtype": "TCP", 00:15:02.441 "max_queue_depth": 128, 00:15:02.441 "max_io_qpairs_per_ctrlr": 127, 00:15:02.441 "in_capsule_data_size": 4096, 00:15:02.441 "max_io_size": 131072, 00:15:02.441 "io_unit_size": 131072, 00:15:02.441 "max_aq_depth": 128, 00:15:02.441 "num_shared_buffers": 511, 00:15:02.441 "buf_cache_size": 4294967295, 00:15:02.441 "dif_insert_or_strip": false, 00:15:02.441 "zcopy": false, 00:15:02.441 "c2h_success": false, 00:15:02.441 "sock_priority": 0, 00:15:02.441 "abort_timeout_sec": 1, 00:15:02.441 "ack_timeout": 0 00:15:02.441 } 00:15:02.441 }, 00:15:02.441 { 00:15:02.441 "method": "nvmf_create_subsystem", 00:15:02.441 "params": { 00:15:02.441 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:02.441 "allow_any_host": false, 00:15:02.441 "serial_number": "00000000000000000000", 00:15:02.441 "model_number": "SPDK bdev Controller", 00:15:02.441 "max_namespaces": 32, 00:15:02.441 "min_cntlid": 1, 00:15:02.441 "max_cntlid": 65519, 00:15:02.441 "ana_reporting": false 00:15:02.441 } 00:15:02.441 }, 00:15:02.441 { 00:15:02.441 "method": "nvmf_subsystem_add_host", 00:15:02.441 "params": { 00:15:02.441 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:02.441 "host": "nqn.2016-06.io.spdk:host1", 00:15:02.441 "psk": "key0" 00:15:02.441 } 00:15:02.441 }, 00:15:02.441 { 00:15:02.441 "method": "nvmf_subsystem_add_ns", 00:15:02.441 "params": { 00:15:02.441 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:02.441 "namespace": { 00:15:02.441 "nsid": 1, 00:15:02.441 "bdev_name": "malloc0", 00:15:02.441 "nguid": "A2C86915D1FF4E7BA4E4DC41FE7F7CFA", 00:15:02.441 "uuid": "a2c86915-d1ff-4e7b-a4e4-dc41fe7f7cfa", 00:15:02.441 "no_auto_visible": false 00:15:02.441 } 00:15:02.441 } 00:15:02.441 }, 00:15:02.441 { 00:15:02.441 "method": "nvmf_subsystem_add_listener", 00:15:02.441 "params": { 00:15:02.441 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:15:02.441 "listen_address": { 00:15:02.441 "trtype": "TCP", 00:15:02.441 "adrfam": "IPv4", 00:15:02.441 "traddr": "10.0.0.2", 00:15:02.441 "trsvcid": "4420" 00:15:02.441 }, 00:15:02.441 "secure_channel": true 00:15:02.441 } 00:15:02.441 } 00:15:02.441 ] 00:15:02.441 } 00:15:02.441 ] 00:15:02.441 }' 00:15:02.441 14:19:43 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:03.007 14:19:44 -- target/tls.sh@264 -- # bperfcfg='{ 00:15:03.007 "subsystems": [ 00:15:03.007 { 00:15:03.007 "subsystem": "keyring", 00:15:03.007 "config": [ 00:15:03.007 { 00:15:03.007 "method": "keyring_file_add_key", 00:15:03.007 "params": { 00:15:03.007 "name": "key0", 00:15:03.007 "path": "/tmp/tmp.5IrwQBrtvl" 00:15:03.007 } 00:15:03.007 } 00:15:03.007 ] 00:15:03.007 }, 00:15:03.007 { 00:15:03.007 "subsystem": "iobuf", 00:15:03.007 "config": [ 00:15:03.007 { 00:15:03.007 "method": "iobuf_set_options", 00:15:03.007 "params": { 00:15:03.007 "small_pool_count": 8192, 00:15:03.007 "large_pool_count": 1024, 00:15:03.007 "small_bufsize": 8192, 00:15:03.007 "large_bufsize": 135168 00:15:03.007 } 00:15:03.007 } 00:15:03.007 ] 00:15:03.007 }, 00:15:03.007 { 00:15:03.007 "subsystem": "sock", 00:15:03.007 "config": [ 00:15:03.007 { 00:15:03.007 "method": "sock_impl_set_options", 00:15:03.007 "params": { 00:15:03.007 "impl_name": "posix", 00:15:03.007 "recv_buf_size": 2097152, 00:15:03.007 "send_buf_size": 2097152, 00:15:03.007 "enable_recv_pipe": true, 00:15:03.007 "enable_quickack": false, 00:15:03.007 "enable_placement_id": 0, 00:15:03.007 "enable_zerocopy_send_server": true, 00:15:03.007 "enable_zerocopy_send_client": false, 00:15:03.007 "zerocopy_threshold": 0, 00:15:03.007 "tls_version": 0, 00:15:03.007 "enable_ktls": false 00:15:03.007 } 00:15:03.007 }, 00:15:03.007 { 00:15:03.007 "method": "sock_impl_set_options", 00:15:03.007 "params": { 00:15:03.007 "impl_name": "ssl", 00:15:03.008 "recv_buf_size": 4096, 00:15:03.008 "send_buf_size": 4096, 00:15:03.008 "enable_recv_pipe": true, 00:15:03.008 "enable_quickack": false, 00:15:03.008 "enable_placement_id": 0, 00:15:03.008 "enable_zerocopy_send_server": true, 00:15:03.008 "enable_zerocopy_send_client": false, 00:15:03.008 "zerocopy_threshold": 0, 00:15:03.008 "tls_version": 0, 00:15:03.008 "enable_ktls": false 00:15:03.008 } 00:15:03.008 } 00:15:03.008 ] 00:15:03.008 }, 00:15:03.008 { 00:15:03.008 "subsystem": "vmd", 00:15:03.008 "config": [] 00:15:03.008 }, 00:15:03.008 { 00:15:03.008 "subsystem": "accel", 00:15:03.008 "config": [ 00:15:03.008 { 00:15:03.008 "method": "accel_set_options", 00:15:03.008 "params": { 00:15:03.008 "small_cache_size": 128, 00:15:03.008 "large_cache_size": 16, 00:15:03.008 "task_count": 2048, 00:15:03.008 "sequence_count": 2048, 00:15:03.008 "buf_count": 2048 00:15:03.008 } 00:15:03.008 } 00:15:03.008 ] 00:15:03.008 }, 00:15:03.008 { 00:15:03.008 "subsystem": "bdev", 00:15:03.008 "config": [ 00:15:03.008 { 00:15:03.008 "method": "bdev_set_options", 00:15:03.008 "params": { 00:15:03.008 "bdev_io_pool_size": 65535, 00:15:03.008 "bdev_io_cache_size": 256, 00:15:03.008 "bdev_auto_examine": true, 00:15:03.008 "iobuf_small_cache_size": 128, 00:15:03.008 "iobuf_large_cache_size": 16 00:15:03.008 } 00:15:03.008 }, 00:15:03.008 { 00:15:03.008 "method": "bdev_raid_set_options", 00:15:03.008 "params": { 00:15:03.008 "process_window_size_kb": 1024 00:15:03.008 } 00:15:03.008 }, 00:15:03.008 { 00:15:03.008 "method": "bdev_iscsi_set_options", 
00:15:03.008 "params": { 00:15:03.008 "timeout_sec": 30 00:15:03.008 } 00:15:03.008 }, 00:15:03.008 { 00:15:03.008 "method": "bdev_nvme_set_options", 00:15:03.008 "params": { 00:15:03.008 "action_on_timeout": "none", 00:15:03.008 "timeout_us": 0, 00:15:03.008 "timeout_admin_us": 0, 00:15:03.008 "keep_alive_timeout_ms": 10000, 00:15:03.008 "arbitration_burst": 0, 00:15:03.008 "low_priority_weight": 0, 00:15:03.008 "medium_priority_weight": 0, 00:15:03.008 "high_priority_weight": 0, 00:15:03.008 "nvme_adminq_poll_period_us": 10000, 00:15:03.008 "nvme_ioq_poll_period_us": 0, 00:15:03.008 "io_queue_requests": 512, 00:15:03.008 "delay_cmd_submit": true, 00:15:03.008 "transport_retry_count": 4, 00:15:03.008 "bdev_retry_count": 3, 00:15:03.008 "transport_ack_timeout": 0, 00:15:03.008 "ctrlr_loss_timeout_sec": 0, 00:15:03.008 "reconnect_delay_sec": 0, 00:15:03.008 "fast_io_fail_timeout_sec": 0, 00:15:03.008 "disable_auto_failback": false, 00:15:03.008 "generate_uuids": false, 00:15:03.008 "transport_tos": 0, 00:15:03.008 "nvme_error_stat": false, 00:15:03.008 "rdma_srq_size": 0, 00:15:03.008 "io_path_stat": false, 00:15:03.008 "allow_accel_sequence": false, 00:15:03.008 "rdma_max_cq_size": 0, 00:15:03.008 "rdma_cm_event_timeout_ms": 0, 00:15:03.008 "dhchap_digests": [ 00:15:03.008 "sha256", 00:15:03.008 "sha384", 00:15:03.008 "sha512" 00:15:03.008 ], 00:15:03.008 "dhchap_dhgroups": [ 00:15:03.008 "null", 00:15:03.008 "ffdhe2048", 00:15:03.008 "ffdhe3072", 00:15:03.008 "ffdhe4096", 00:15:03.008 "ffdhe6144", 00:15:03.008 "ffdhe8192" 00:15:03.008 ] 00:15:03.008 } 00:15:03.008 }, 00:15:03.008 { 00:15:03.008 "method": "bdev_nvme_attach_controller", 00:15:03.008 "params": { 00:15:03.008 "name": "nvme0", 00:15:03.008 "trtype": "TCP", 00:15:03.008 "adrfam": "IPv4", 00:15:03.008 "traddr": "10.0.0.2", 00:15:03.008 "trsvcid": "4420", 00:15:03.008 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:03.008 "prchk_reftag": false, 00:15:03.008 "prchk_guard": false, 00:15:03.008 "ctrlr_loss_timeout_sec": 0, 00:15:03.008 "reconnect_delay_sec": 0, 00:15:03.008 "fast_io_fail_timeout_sec": 0, 00:15:03.008 "psk": "key0", 00:15:03.008 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:03.008 "hdgst": false, 00:15:03.008 "ddgst": false 00:15:03.008 } 00:15:03.008 }, 00:15:03.008 { 00:15:03.008 "method": "bdev_nvme_set_hotplug", 00:15:03.008 "params": { 00:15:03.008 "period_us": 100000, 00:15:03.008 "enable": false 00:15:03.008 } 00:15:03.008 }, 00:15:03.008 { 00:15:03.008 "method": "bdev_enable_histogram", 00:15:03.008 "params": { 00:15:03.008 "name": "nvme0n1", 00:15:03.008 "enable": true 00:15:03.008 } 00:15:03.008 }, 00:15:03.008 { 00:15:03.008 "method": "bdev_wait_for_examine" 00:15:03.008 } 00:15:03.008 ] 00:15:03.008 }, 00:15:03.008 { 00:15:03.008 "subsystem": "nbd", 00:15:03.008 "config": [] 00:15:03.008 } 00:15:03.008 ] 00:15:03.008 }' 00:15:03.008 14:19:44 -- target/tls.sh@266 -- # killprocess 3157061 00:15:03.008 14:19:44 -- common/autotest_common.sh@936 -- # '[' -z 3157061 ']' 00:15:03.008 14:19:44 -- common/autotest_common.sh@940 -- # kill -0 3157061 00:15:03.008 14:19:44 -- common/autotest_common.sh@941 -- # uname 00:15:03.008 14:19:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:03.008 14:19:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3157061 00:15:03.008 14:19:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:03.008 14:19:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:03.008 14:19:44 -- common/autotest_common.sh@954 -- # echo 
'killing process with pid 3157061' 00:15:03.008 killing process with pid 3157061 00:15:03.008 14:19:44 -- common/autotest_common.sh@955 -- # kill 3157061 00:15:03.008 Received shutdown signal, test time was about 1.000000 seconds 00:15:03.008 00:15:03.008 Latency(us) 00:15:03.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.008 =================================================================================================================== 00:15:03.008 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:03.008 14:19:44 -- common/autotest_common.sh@960 -- # wait 3157061 00:15:03.268 14:19:44 -- target/tls.sh@267 -- # killprocess 3157030 00:15:03.268 14:19:44 -- common/autotest_common.sh@936 -- # '[' -z 3157030 ']' 00:15:03.268 14:19:44 -- common/autotest_common.sh@940 -- # kill -0 3157030 00:15:03.268 14:19:44 -- common/autotest_common.sh@941 -- # uname 00:15:03.268 14:19:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:03.268 14:19:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3157030 00:15:03.268 14:19:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:03.268 14:19:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:03.268 14:19:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3157030' 00:15:03.268 killing process with pid 3157030 00:15:03.268 14:19:44 -- common/autotest_common.sh@955 -- # kill 3157030 00:15:03.268 14:19:44 -- common/autotest_common.sh@960 -- # wait 3157030 00:15:03.527 14:19:44 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:15:03.527 14:19:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:03.528 14:19:44 -- target/tls.sh@269 -- # echo '{ 00:15:03.528 "subsystems": [ 00:15:03.528 { 00:15:03.528 "subsystem": "keyring", 00:15:03.528 "config": [ 00:15:03.528 { 00:15:03.528 "method": "keyring_file_add_key", 00:15:03.528 "params": { 00:15:03.528 "name": "key0", 00:15:03.528 "path": "/tmp/tmp.5IrwQBrtvl" 00:15:03.528 } 00:15:03.528 } 00:15:03.528 ] 00:15:03.528 }, 00:15:03.528 { 00:15:03.528 "subsystem": "iobuf", 00:15:03.528 "config": [ 00:15:03.528 { 00:15:03.528 "method": "iobuf_set_options", 00:15:03.528 "params": { 00:15:03.528 "small_pool_count": 8192, 00:15:03.528 "large_pool_count": 1024, 00:15:03.528 "small_bufsize": 8192, 00:15:03.528 "large_bufsize": 135168 00:15:03.528 } 00:15:03.528 } 00:15:03.528 ] 00:15:03.528 }, 00:15:03.528 { 00:15:03.528 "subsystem": "sock", 00:15:03.528 "config": [ 00:15:03.528 { 00:15:03.528 "method": "sock_impl_set_options", 00:15:03.528 "params": { 00:15:03.528 "impl_name": "posix", 00:15:03.528 "recv_buf_size": 2097152, 00:15:03.528 "send_buf_size": 2097152, 00:15:03.528 "enable_recv_pipe": true, 00:15:03.528 "enable_quickack": false, 00:15:03.528 "enable_placement_id": 0, 00:15:03.528 "enable_zerocopy_send_server": true, 00:15:03.528 "enable_zerocopy_send_client": false, 00:15:03.528 "zerocopy_threshold": 0, 00:15:03.528 "tls_version": 0, 00:15:03.528 "enable_ktls": false 00:15:03.528 } 00:15:03.528 }, 00:15:03.528 { 00:15:03.528 "method": "sock_impl_set_options", 00:15:03.528 "params": { 00:15:03.528 "impl_name": "ssl", 00:15:03.528 "recv_buf_size": 4096, 00:15:03.528 "send_buf_size": 4096, 00:15:03.528 "enable_recv_pipe": true, 00:15:03.528 "enable_quickack": false, 00:15:03.528 "enable_placement_id": 0, 00:15:03.528 "enable_zerocopy_send_server": true, 00:15:03.528 "enable_zerocopy_send_client": false, 00:15:03.528 "zerocopy_threshold": 0, 00:15:03.528 "tls_version": 0, 
00:15:03.528 "enable_ktls": false 00:15:03.528 } 00:15:03.528 } 00:15:03.528 ] 00:15:03.528 }, 00:15:03.528 { 00:15:03.528 "subsystem": "vmd", 00:15:03.528 "config": [] 00:15:03.528 }, 00:15:03.528 { 00:15:03.528 "subsystem": "accel", 00:15:03.528 "config": [ 00:15:03.528 { 00:15:03.528 "method": "accel_set_options", 00:15:03.528 "params": { 00:15:03.528 "small_cache_size": 128, 00:15:03.528 "large_cache_size": 16, 00:15:03.528 "task_count": 2048, 00:15:03.528 "sequence_count": 2048, 00:15:03.528 "buf_count": 2048 00:15:03.528 } 00:15:03.528 } 00:15:03.528 ] 00:15:03.528 }, 00:15:03.528 { 00:15:03.528 "subsystem": "bdev", 00:15:03.528 "config": [ 00:15:03.528 { 00:15:03.528 "method": "bdev_set_options", 00:15:03.528 "params": { 00:15:03.528 "bdev_io_pool_size": 65535, 00:15:03.528 "bdev_io_cache_size": 256, 00:15:03.528 "bdev_auto_examine": true, 00:15:03.528 "iobuf_small_cache_size": 128, 00:15:03.528 "iobuf_large_cache_size": 16 00:15:03.528 } 00:15:03.528 }, 00:15:03.528 { 00:15:03.528 "method": "bdev_raid_set_options", 00:15:03.528 "params": { 00:15:03.528 "process_window_size_kb": 1024 00:15:03.528 } 00:15:03.528 }, 00:15:03.528 { 00:15:03.528 "method": "bdev_iscsi_set_options", 00:15:03.528 "params": { 00:15:03.528 "timeout_sec": 30 00:15:03.528 } 00:15:03.528 }, 00:15:03.528 { 00:15:03.528 "method": "bdev_nvme_set_options", 00:15:03.528 "params": { 00:15:03.528 "action_on_timeout": "none", 00:15:03.528 "timeout_us": 0, 00:15:03.528 "timeout_admin_us": 0, 00:15:03.528 "keep_alive_timeout_ms": 10000, 00:15:03.528 "arbitration_burst": 0, 00:15:03.528 "low_priority_weight": 0, 00:15:03.528 "medium_priority_weight": 0, 00:15:03.528 "high_priority_weight": 0, 00:15:03.528 "nvme_adminq_poll_period_us": 10000, 00:15:03.528 "nvme_ioq_poll_period_us": 0, 00:15:03.528 "io_queue_requests": 0, 00:15:03.528 "delay_cmd_submit": true, 00:15:03.528 "transport_retry_count": 4, 00:15:03.528 "bdev_retry_count": 3, 00:15:03.528 "transport_ack_timeout": 0, 00:15:03.528 "ctrlr_loss_timeout_sec": 0, 00:15:03.528 "reconnect_delay_sec": 0, 00:15:03.528 "fast_io_fail_timeout_sec": 0, 00:15:03.528 "disable_auto_failback": false, 00:15:03.528 "generate_uuids": false, 00:15:03.528 "transport_tos": 0, 00:15:03.528 "nvme_error_stat": false, 00:15:03.528 "rdma_srq_size": 0, 00:15:03.528 "io_path_stat": false, 00:15:03.528 "allow_accel_sequence": false, 00:15:03.528 "rdma_max_cq_size": 0, 00:15:03.528 "rdma_cm_event_timeout_ms": 0, 00:15:03.528 "dhchap_digests": [ 00:15:03.528 "sha256", 00:15:03.528 "sha384", 00:15:03.528 "sha512" 00:15:03.528 ], 00:15:03.528 "dhchap_dhgroups": [ 00:15:03.528 "null", 00:15:03.528 "ffdhe2048", 00:15:03.528 "ffdhe3072", 00:15:03.528 "ffdhe4096", 00:15:03.528 "ffdhe6144", 00:15:03.528 "ffdhe8192" 00:15:03.528 ] 00:15:03.528 } 00:15:03.528 }, 00:15:03.528 { 00:15:03.528 "method": "bdev_nvme_set_hotplug", 00:15:03.528 "params": { 00:15:03.528 "period_us": 100000, 00:15:03.528 "enable": false 00:15:03.528 } 00:15:03.528 }, 00:15:03.528 { 00:15:03.528 "method": "bdev_malloc_create", 00:15:03.528 "params": { 00:15:03.528 "name": "malloc0", 00:15:03.528 "num_blocks": 8192, 00:15:03.528 "block_size": 4096, 00:15:03.528 "physical_block_size": 4096, 00:15:03.528 "uuid": "a2c86915-d1ff-4e7b-a4e4-dc41fe7f7cfa", 00:15:03.528 "optimal_io_boundary": 0 00:15:03.528 } 00:15:03.528 }, 00:15:03.528 { 00:15:03.528 "method": "bdev_wait_for_examine" 00:15:03.528 } 00:15:03.528 ] 00:15:03.528 }, 00:15:03.528 { 00:15:03.528 "subsystem": "nbd", 00:15:03.528 "config": [] 00:15:03.528 }, 00:15:03.528 { 
00:15:03.528 "subsystem": "scheduler", 00:15:03.528 "config": [ 00:15:03.528 { 00:15:03.528 "method": "framework_set_scheduler", 00:15:03.528 "params": { 00:15:03.528 "name": "static" 00:15:03.528 } 00:15:03.528 } 00:15:03.528 ] 00:15:03.528 }, 00:15:03.528 { 00:15:03.528 "subsystem": "nvmf", 00:15:03.528 "config": [ 00:15:03.528 { 00:15:03.528 "method": "nvmf_set_config", 00:15:03.528 "params": { 00:15:03.528 "discovery_filter": "match_any", 00:15:03.528 "admin_cmd_passthru": { 00:15:03.528 "identify_ctrlr": false 00:15:03.528 } 00:15:03.528 } 00:15:03.528 }, 00:15:03.528 { 00:15:03.528 "method": "nvmf_set_max_subsystems", 00:15:03.528 "params": { 00:15:03.528 "max_subsystems": 1024 00:15:03.528 } 00:15:03.528 }, 00:15:03.528 { 00:15:03.528 "method": "nvmf_set_crdt", 00:15:03.528 "params": { 00:15:03.528 "crdt1": 0, 00:15:03.528 "crdt2": 0, 00:15:03.528 "crdt3": 0 00:15:03.528 } 00:15:03.528 }, 00:15:03.528 { 00:15:03.528 "method": "nvmf_create_transport", 00:15:03.528 "params": { 00:15:03.528 "trtype": "TCP", 00:15:03.528 "max_queue_depth": 128, 00:15:03.528 "max_io_qpairs_per_ctrlr": 127, 00:15:03.528 "in_capsule_data_size": 4096, 00:15:03.528 "max_io_size": 131072, 00:15:03.528 "io_unit_size": 131072, 00:15:03.528 "max_aq_depth": 128, 00:15:03.528 "num_shared_buffers": 511, 00:15:03.528 "buf_cache_size": 4294967295, 00:15:03.528 "dif_insert_or_strip": false, 00:15:03.528 "zcopy": false, 00:15:03.528 "c2h_success": false, 00:15:03.528 "sock_priority": 0, 00:15:03.528 "abort_timeout_sec": 1, 00:15:03.528 "ack_timeout": 0 00:15:03.528 } 00:15:03.528 }, 00:15:03.528 { 00:15:03.528 "method": "nvmf_create_subsystem", 00:15:03.528 "params": { 00:15:03.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:03.528 "allow_any_host": false, 00:15:03.528 "serial_number": "00000000000000000000", 00:15:03.528 "model_number": "SPDK bdev Controller", 00:15:03.528 "max_namespaces": 32, 00:15:03.528 "min_cntlid": 1, 00:15:03.528 "max_cntlid": 65519, 00:15:03.528 "ana_reporting": false 00:15:03.528 } 00:15:03.528 }, 00:15:03.528 { 00:15:03.528 "method": "nvmf_subsystem_add_host", 00:15:03.528 "params": { 00:15:03.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:03.528 "host": "nqn.2016-06.io.spdk:host1", 00:15:03.528 "psk": "key0" 00:15:03.528 } 00:15:03.528 }, 00:15:03.528 { 00:15:03.528 "method": "nvmf_subsystem_add_ns", 00:15:03.528 "params": { 00:15:03.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:03.528 "namespace": { 00:15:03.528 "nsid": 1, 00:15:03.528 "bdev_name": "malloc0", 00:15:03.528 "nguid": "A2C86915D1FF4E7BA4E4DC41FE7F7CFA", 00:15:03.528 "uuid": "a2c86915-d1ff-4e7b-a4e4-dc41fe7f7cfa", 00:15:03.528 "no_auto_visible": false 00:15:03.528 } 00:15:03.528 } 00:15:03.528 }, 00:15:03.528 { 00:15:03.528 "method": "nvmf_subsystem_add_listener", 00:15:03.528 "params": { 00:15:03.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:03.528 "listen_address": { 00:15:03.528 "trtype": "TCP", 00:15:03.528 "adrfam": "IPv4", 00:15:03.528 "traddr": "10.0.0.2", 00:15:03.528 "trsvcid": "4420" 00:15:03.528 }, 00:15:03.528 "secure_channel": true 00:15:03.528 } 00:15:03.528 } 00:15:03.528 ] 00:15:03.528 } 00:15:03.528 ] 00:15:03.528 }' 00:15:03.528 14:19:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:03.528 14:19:44 -- common/autotest_common.sh@10 -- # set +x 00:15:03.528 14:19:44 -- nvmf/common.sh@470 -- # nvmfpid=3157375 00:15:03.529 14:19:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:03.529 
14:19:44 -- nvmf/common.sh@471 -- # waitforlisten 3157375 00:15:03.529 14:19:44 -- common/autotest_common.sh@817 -- # '[' -z 3157375 ']' 00:15:03.529 14:19:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.529 14:19:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:03.529 14:19:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.529 14:19:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:03.529 14:19:44 -- common/autotest_common.sh@10 -- # set +x 00:15:03.529 [2024-04-26 14:19:44.897891] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:15:03.529 [2024-04-26 14:19:44.897980] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.529 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.529 [2024-04-26 14:19:44.961542] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.529 [2024-04-26 14:19:45.075035] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:03.529 [2024-04-26 14:19:45.075099] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:03.529 [2024-04-26 14:19:45.075114] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:03.529 [2024-04-26 14:19:45.075128] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:03.529 [2024-04-26 14:19:45.075140] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:03.529 [2024-04-26 14:19:45.075230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.787 [2024-04-26 14:19:45.294799] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.787 [2024-04-26 14:19:45.326814] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:03.787 [2024-04-26 14:19:45.340829] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:04.355 14:19:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:04.355 14:19:45 -- common/autotest_common.sh@850 -- # return 0 00:15:04.355 14:19:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:04.355 14:19:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:04.355 14:19:45 -- common/autotest_common.sh@10 -- # set +x 00:15:04.355 14:19:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.355 14:19:45 -- target/tls.sh@272 -- # bdevperf_pid=3157497 00:15:04.355 14:19:45 -- target/tls.sh@273 -- # waitforlisten 3157497 /var/tmp/bdevperf.sock 00:15:04.355 14:19:45 -- common/autotest_common.sh@817 -- # '[' -z 3157497 ']' 00:15:04.355 14:19:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:04.355 14:19:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:04.355 14:19:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:04.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:04.355 14:19:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:04.355 14:19:45 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:04.355 14:19:45 -- common/autotest_common.sh@10 -- # set +x 00:15:04.355 14:19:45 -- target/tls.sh@270 -- # echo '{ 00:15:04.355 "subsystems": [ 00:15:04.355 { 00:15:04.355 "subsystem": "keyring", 00:15:04.355 "config": [ 00:15:04.355 { 00:15:04.355 "method": "keyring_file_add_key", 00:15:04.355 "params": { 00:15:04.355 "name": "key0", 00:15:04.355 "path": "/tmp/tmp.5IrwQBrtvl" 00:15:04.355 } 00:15:04.355 } 00:15:04.355 ] 00:15:04.355 }, 00:15:04.355 { 00:15:04.355 "subsystem": "iobuf", 00:15:04.355 "config": [ 00:15:04.355 { 00:15:04.355 "method": "iobuf_set_options", 00:15:04.355 "params": { 00:15:04.355 "small_pool_count": 8192, 00:15:04.355 "large_pool_count": 1024, 00:15:04.355 "small_bufsize": 8192, 00:15:04.355 "large_bufsize": 135168 00:15:04.355 } 00:15:04.355 } 00:15:04.355 ] 00:15:04.355 }, 00:15:04.355 { 00:15:04.355 "subsystem": "sock", 00:15:04.355 "config": [ 00:15:04.355 { 00:15:04.355 "method": "sock_impl_set_options", 00:15:04.355 "params": { 00:15:04.355 "impl_name": "posix", 00:15:04.355 "recv_buf_size": 2097152, 00:15:04.355 "send_buf_size": 2097152, 00:15:04.355 "enable_recv_pipe": true, 00:15:04.355 "enable_quickack": false, 00:15:04.355 "enable_placement_id": 0, 00:15:04.355 "enable_zerocopy_send_server": true, 00:15:04.355 "enable_zerocopy_send_client": false, 00:15:04.355 "zerocopy_threshold": 0, 00:15:04.355 "tls_version": 0, 00:15:04.355 "enable_ktls": false 00:15:04.355 } 00:15:04.355 }, 00:15:04.355 { 00:15:04.355 "method": "sock_impl_set_options", 00:15:04.355 "params": { 00:15:04.355 "impl_name": "ssl", 00:15:04.356 "recv_buf_size": 4096, 00:15:04.356 "send_buf_size": 4096, 00:15:04.356 "enable_recv_pipe": true, 00:15:04.356 "enable_quickack": false, 00:15:04.356 "enable_placement_id": 0, 00:15:04.356 "enable_zerocopy_send_server": true, 00:15:04.356 "enable_zerocopy_send_client": false, 00:15:04.356 "zerocopy_threshold": 0, 00:15:04.356 "tls_version": 0, 00:15:04.356 "enable_ktls": false 00:15:04.356 } 00:15:04.356 } 00:15:04.356 ] 00:15:04.356 }, 00:15:04.356 { 00:15:04.356 "subsystem": "vmd", 00:15:04.356 "config": [] 00:15:04.356 }, 00:15:04.356 { 00:15:04.356 "subsystem": "accel", 00:15:04.356 "config": [ 00:15:04.356 { 00:15:04.356 "method": "accel_set_options", 00:15:04.356 "params": { 00:15:04.356 "small_cache_size": 128, 00:15:04.356 "large_cache_size": 16, 00:15:04.356 "task_count": 2048, 00:15:04.356 "sequence_count": 2048, 00:15:04.356 "buf_count": 2048 00:15:04.356 } 00:15:04.356 } 00:15:04.356 ] 00:15:04.356 }, 00:15:04.356 { 00:15:04.356 "subsystem": "bdev", 00:15:04.356 "config": [ 00:15:04.356 { 00:15:04.356 "method": "bdev_set_options", 00:15:04.356 "params": { 00:15:04.356 "bdev_io_pool_size": 65535, 00:15:04.356 "bdev_io_cache_size": 256, 00:15:04.356 "bdev_auto_examine": true, 00:15:04.356 "iobuf_small_cache_size": 128, 00:15:04.356 "iobuf_large_cache_size": 16 00:15:04.356 } 00:15:04.356 }, 00:15:04.356 { 00:15:04.356 "method": "bdev_raid_set_options", 00:15:04.356 "params": { 00:15:04.356 "process_window_size_kb": 1024 00:15:04.356 } 00:15:04.356 }, 00:15:04.356 { 00:15:04.356 "method": "bdev_iscsi_set_options", 00:15:04.356 "params": { 00:15:04.356 
"timeout_sec": 30 00:15:04.356 } 00:15:04.356 }, 00:15:04.356 { 00:15:04.356 "method": "bdev_nvme_set_options", 00:15:04.356 "params": { 00:15:04.356 "action_on_timeout": "none", 00:15:04.356 "timeout_us": 0, 00:15:04.356 "timeout_admin_us": 0, 00:15:04.356 "keep_alive_timeout_ms": 10000, 00:15:04.356 "arbitration_burst": 0, 00:15:04.356 "low_priority_weight": 0, 00:15:04.356 "medium_priority_weight": 0, 00:15:04.356 "high_priority_weight": 0, 00:15:04.356 "nvme_adminq_poll_period_us": 10000, 00:15:04.356 "nvme_ioq_poll_period_us": 0, 00:15:04.356 "io_queue_requests": 512, 00:15:04.356 "delay_cmd_submit": true, 00:15:04.356 "transport_retry_count": 4, 00:15:04.356 "bdev_retry_count": 3, 00:15:04.356 "transport_ack_timeout": 0, 00:15:04.356 "ctrlr_loss_timeout_sec": 0, 00:15:04.356 "reconnect_delay_sec": 0, 00:15:04.356 "fast_io_fail_timeout_sec": 0, 00:15:04.356 "disable_auto_failback": false, 00:15:04.356 "generate_uuids": false, 00:15:04.356 "transport_tos": 0, 00:15:04.356 "nvme_error_stat": false, 00:15:04.356 "rdma_srq_size": 0, 00:15:04.356 "io_path_stat": false, 00:15:04.356 "allow_accel_sequence": false, 00:15:04.356 "rdma_max_cq_size": 0, 00:15:04.356 "rdma_cm_event_timeout_ms": 0, 00:15:04.356 "dhchap_digests": [ 00:15:04.356 "sha256", 00:15:04.356 "sha384", 00:15:04.356 "sha512" 00:15:04.356 ], 00:15:04.356 "dhchap_dhgroups": [ 00:15:04.356 "null", 00:15:04.356 "ffdhe2048", 00:15:04.356 "ffdhe3072", 00:15:04.356 "ffdhe4096", 00:15:04.356 "ffdhe6144", 00:15:04.356 "ffdhe8192" 00:15:04.356 ] 00:15:04.356 } 00:15:04.356 }, 00:15:04.356 { 00:15:04.356 "method": "bdev_nvme_attach_controller", 00:15:04.356 "params": { 00:15:04.356 "name": "nvme0", 00:15:04.356 "trtype": "TCP", 00:15:04.356 "adrfam": "IPv4", 00:15:04.356 "traddr": "10.0.0.2", 00:15:04.356 "trsvcid": "4420", 00:15:04.356 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:04.356 "prchk_reftag": false, 00:15:04.356 "prchk_guard": false, 00:15:04.356 "ctrlr_loss_timeout_sec": 0, 00:15:04.356 "reconnect_delay_sec": 0, 00:15:04.356 "fast_io_fail_timeout_sec": 0, 00:15:04.356 "psk": "key0", 00:15:04.356 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:04.356 "hdgst": false, 00:15:04.356 "ddgst": false 00:15:04.356 } 00:15:04.356 }, 00:15:04.356 { 00:15:04.356 "method": "bdev_nvme_set_hotplug", 00:15:04.356 "params": { 00:15:04.356 "period_us": 100000, 00:15:04.356 "enable": false 00:15:04.356 } 00:15:04.356 }, 00:15:04.356 { 00:15:04.356 "method": "bdev_enable_histogram", 00:15:04.356 "params": { 00:15:04.356 "name": "nvme0n1", 00:15:04.356 "enable": true 00:15:04.356 } 00:15:04.356 }, 00:15:04.356 { 00:15:04.356 "method": "bdev_wait_for_examine" 00:15:04.356 } 00:15:04.356 ] 00:15:04.356 }, 00:15:04.356 { 00:15:04.356 "subsystem": "nbd", 00:15:04.356 "config": [] 00:15:04.356 } 00:15:04.356 ] 00:15:04.356 }' 00:15:04.614 [2024-04-26 14:19:45.964150] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
00:15:04.614 [2024-04-26 14:19:45.964245] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3157497 ] 00:15:04.614 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.614 [2024-04-26 14:19:46.019110] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.614 [2024-04-26 14:19:46.136628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.872 [2024-04-26 14:19:46.298985] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:05.439 14:19:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:05.439 14:19:46 -- common/autotest_common.sh@850 -- # return 0 00:15:05.439 14:19:46 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:05.439 14:19:46 -- target/tls.sh@275 -- # jq -r '.[].name' 00:15:06.005 14:19:47 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.005 14:19:47 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:06.005 Running I/O for 1 seconds... 00:15:06.938 00:15:06.938 Latency(us) 00:15:06.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.938 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:06.938 Verification LBA range: start 0x0 length 0x2000 00:15:06.938 nvme0n1 : 1.03 2964.57 11.58 0.00 0.00 42495.52 7427.41 60972.75 00:15:06.938 =================================================================================================================== 00:15:06.938 Total : 2964.57 11.58 0.00 0.00 42495.52 7427.41 60972.75 00:15:06.938 0 00:15:06.938 14:19:48 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:15:06.938 14:19:48 -- target/tls.sh@279 -- # cleanup 00:15:06.938 14:19:48 -- target/tls.sh@15 -- # process_shm --id 0 00:15:06.938 14:19:48 -- common/autotest_common.sh@794 -- # type=--id 00:15:06.938 14:19:48 -- common/autotest_common.sh@795 -- # id=0 00:15:06.938 14:19:48 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:15:06.938 14:19:48 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:06.938 14:19:48 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:15:06.938 14:19:48 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:15:06.938 14:19:48 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:15:06.938 14:19:48 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:06.938 nvmf_trace.0 00:15:07.196 14:19:48 -- common/autotest_common.sh@809 -- # return 0 00:15:07.196 14:19:48 -- target/tls.sh@16 -- # killprocess 3157497 00:15:07.196 14:19:48 -- common/autotest_common.sh@936 -- # '[' -z 3157497 ']' 00:15:07.196 14:19:48 -- common/autotest_common.sh@940 -- # kill -0 3157497 00:15:07.196 14:19:48 -- common/autotest_common.sh@941 -- # uname 00:15:07.196 14:19:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:07.196 14:19:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3157497 00:15:07.196 14:19:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:07.196 14:19:48 -- common/autotest_common.sh@946 -- # 
'[' reactor_1 = sudo ']' 00:15:07.196 14:19:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3157497' 00:15:07.196 killing process with pid 3157497 00:15:07.196 14:19:48 -- common/autotest_common.sh@955 -- # kill 3157497 00:15:07.196 Received shutdown signal, test time was about 1.000000 seconds 00:15:07.196 00:15:07.196 Latency(us) 00:15:07.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.196 =================================================================================================================== 00:15:07.196 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:07.196 14:19:48 -- common/autotest_common.sh@960 -- # wait 3157497 00:15:07.196 14:19:48 -- target/tls.sh@17 -- # nvmftestfini 00:15:07.196 14:19:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:07.196 14:19:48 -- nvmf/common.sh@117 -- # sync 00:15:07.196 14:19:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:07.196 14:19:48 -- nvmf/common.sh@120 -- # set +e 00:15:07.196 14:19:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:07.196 14:19:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:07.196 rmmod nvme_tcp 00:15:07.454 rmmod nvme_fabrics 00:15:07.454 rmmod nvme_keyring 00:15:07.454 14:19:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:07.454 14:19:48 -- nvmf/common.sh@124 -- # set -e 00:15:07.454 14:19:48 -- nvmf/common.sh@125 -- # return 0 00:15:07.454 14:19:48 -- nvmf/common.sh@478 -- # '[' -n 3157375 ']' 00:15:07.454 14:19:48 -- nvmf/common.sh@479 -- # killprocess 3157375 00:15:07.454 14:19:48 -- common/autotest_common.sh@936 -- # '[' -z 3157375 ']' 00:15:07.454 14:19:48 -- common/autotest_common.sh@940 -- # kill -0 3157375 00:15:07.454 14:19:48 -- common/autotest_common.sh@941 -- # uname 00:15:07.454 14:19:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:07.454 14:19:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3157375 00:15:07.454 14:19:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:07.454 14:19:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:07.454 14:19:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3157375' 00:15:07.454 killing process with pid 3157375 00:15:07.454 14:19:48 -- common/autotest_common.sh@955 -- # kill 3157375 00:15:07.454 14:19:48 -- common/autotest_common.sh@960 -- # wait 3157375 00:15:07.714 14:19:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:07.714 14:19:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:07.714 14:19:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:07.714 14:19:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:07.714 14:19:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:07.714 14:19:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.714 14:19:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.714 14:19:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.618 14:19:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:09.618 14:19:51 -- target/tls.sh@18 -- # rm -f /tmp/tmp.2fkpFBchYI /tmp/tmp.lKB2XfFYFL /tmp/tmp.5IrwQBrtvl 00:15:09.618 00:15:09.618 real 1m20.403s 00:15:09.618 user 2m14.879s 00:15:09.618 sys 0m23.307s 00:15:09.618 14:19:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:09.618 14:19:51 -- common/autotest_common.sh@10 -- # set +x 00:15:09.618 ************************************ 00:15:09.618 END TEST nvmf_tls 00:15:09.618 
************************************ 00:15:09.618 14:19:51 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:09.618 14:19:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:09.618 14:19:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:09.618 14:19:51 -- common/autotest_common.sh@10 -- # set +x 00:15:09.877 ************************************ 00:15:09.877 START TEST nvmf_fips 00:15:09.877 ************************************ 00:15:09.877 14:19:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:09.877 * Looking for test storage... 00:15:09.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:15:09.877 14:19:51 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:09.877 14:19:51 -- nvmf/common.sh@7 -- # uname -s 00:15:09.877 14:19:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:09.877 14:19:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:09.877 14:19:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:09.877 14:19:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:09.877 14:19:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:09.877 14:19:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:09.877 14:19:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:09.877 14:19:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:09.877 14:19:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:09.877 14:19:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:09.877 14:19:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:09.877 14:19:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:15:09.877 14:19:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:09.877 14:19:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:09.877 14:19:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:09.877 14:19:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:09.877 14:19:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:09.877 14:19:51 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.877 14:19:51 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.877 14:19:51 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.877 14:19:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.877 14:19:51 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.877 14:19:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.877 14:19:51 -- paths/export.sh@5 -- # export PATH 00:15:09.877 14:19:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.877 14:19:51 -- nvmf/common.sh@47 -- # : 0 00:15:09.877 14:19:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:09.877 14:19:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:09.877 14:19:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:09.877 14:19:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:09.877 14:19:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:09.877 14:19:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:09.877 14:19:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:09.877 14:19:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:09.877 14:19:51 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:09.877 14:19:51 -- fips/fips.sh@89 -- # check_openssl_version 00:15:09.877 14:19:51 -- fips/fips.sh@83 -- # local target=3.0.0 00:15:09.877 14:19:51 -- fips/fips.sh@85 -- # openssl version 00:15:09.877 14:19:51 -- fips/fips.sh@85 -- # awk '{print $2}' 00:15:09.877 14:19:51 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:15:09.877 14:19:51 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:15:09.877 14:19:51 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:15:09.877 14:19:51 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:15:09.877 14:19:51 -- scripts/common.sh@333 -- # IFS=.-: 00:15:09.877 14:19:51 -- scripts/common.sh@333 -- # read -ra ver1 00:15:09.877 14:19:51 -- scripts/common.sh@334 -- # IFS=.-: 00:15:09.877 14:19:51 -- scripts/common.sh@334 -- # read -ra ver2 00:15:09.877 14:19:51 -- scripts/common.sh@335 -- # local 'op=>=' 00:15:09.877 14:19:51 -- scripts/common.sh@337 -- # ver1_l=3 00:15:09.877 14:19:51 -- scripts/common.sh@338 -- # ver2_l=3 00:15:09.877 14:19:51 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 
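The ge 3.0.9 3.0.0 call above drives the field-by-field comparison traced below: cmp_versions splits both version strings on ., - and :, then walks the fields numerically from the left. Condensed into a self-contained helper (a sketch of the idea, not the script's exact code):

    ver_ge() {                        # succeed when version $1 >= version $2
        local IFS='.-:' i
        local -a a=($1) b=($2)        # IFS splits "3.0.9" into fields 3 0 9
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 1
        done
        return 0                      # every field equal
    }
    ver_ge "$(openssl version | awk '{print $2}')" 3.0.0 && echo 'OpenSSL is 3.0.0 or newer'

Here 3.0.9 wins on the third field, matching the return 0 in the trace that follows.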
00:15:09.877 14:19:51 -- scripts/common.sh@341 -- # case "$op" in 00:15:09.877 14:19:51 -- scripts/common.sh@345 -- # : 1 00:15:09.877 14:19:51 -- scripts/common.sh@361 -- # (( v = 0 )) 00:15:09.877 14:19:51 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:09.877 14:19:51 -- scripts/common.sh@362 -- # decimal 3 00:15:09.877 14:19:51 -- scripts/common.sh@350 -- # local d=3 00:15:09.877 14:19:51 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:09.877 14:19:51 -- scripts/common.sh@352 -- # echo 3 00:15:09.877 14:19:51 -- scripts/common.sh@362 -- # ver1[v]=3 00:15:09.877 14:19:51 -- scripts/common.sh@363 -- # decimal 3 00:15:09.877 14:19:51 -- scripts/common.sh@350 -- # local d=3 00:15:09.877 14:19:51 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:09.877 14:19:51 -- scripts/common.sh@352 -- # echo 3 00:15:09.877 14:19:51 -- scripts/common.sh@363 -- # ver2[v]=3 00:15:09.877 14:19:51 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:09.877 14:19:51 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:09.877 14:19:51 -- scripts/common.sh@361 -- # (( v++ )) 00:15:09.877 14:19:51 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:09.877 14:19:51 -- scripts/common.sh@362 -- # decimal 0 00:15:09.877 14:19:51 -- scripts/common.sh@350 -- # local d=0 00:15:09.877 14:19:51 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:09.877 14:19:51 -- scripts/common.sh@352 -- # echo 0 00:15:09.877 14:19:51 -- scripts/common.sh@362 -- # ver1[v]=0 00:15:09.877 14:19:51 -- scripts/common.sh@363 -- # decimal 0 00:15:09.877 14:19:51 -- scripts/common.sh@350 -- # local d=0 00:15:09.877 14:19:51 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:09.877 14:19:51 -- scripts/common.sh@352 -- # echo 0 00:15:09.877 14:19:51 -- scripts/common.sh@363 -- # ver2[v]=0 00:15:09.877 14:19:51 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:09.877 14:19:51 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:09.877 14:19:51 -- scripts/common.sh@361 -- # (( v++ )) 00:15:09.877 14:19:51 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:09.877 14:19:51 -- scripts/common.sh@362 -- # decimal 9 00:15:09.877 14:19:51 -- scripts/common.sh@350 -- # local d=9 00:15:09.877 14:19:51 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:15:09.877 14:19:51 -- scripts/common.sh@352 -- # echo 9 00:15:09.877 14:19:51 -- scripts/common.sh@362 -- # ver1[v]=9 00:15:09.877 14:19:51 -- scripts/common.sh@363 -- # decimal 0 00:15:09.877 14:19:51 -- scripts/common.sh@350 -- # local d=0 00:15:09.877 14:19:51 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:09.877 14:19:51 -- scripts/common.sh@352 -- # echo 0 00:15:09.877 14:19:51 -- scripts/common.sh@363 -- # ver2[v]=0 00:15:09.877 14:19:51 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:09.877 14:19:51 -- scripts/common.sh@364 -- # return 0 00:15:09.877 14:19:51 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:15:09.877 14:19:51 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:15:09.877 14:19:51 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:15:09.878 14:19:51 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:09.878 14:19:51 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:09.878 14:19:51 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:15:09.878 14:19:51 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:15:09.878 14:19:51 -- fips/fips.sh@113 -- # build_openssl_config 00:15:09.878 14:19:51 -- fips/fips.sh@37 -- # cat 00:15:09.878 14:19:51 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:15:09.878 14:19:51 -- fips/fips.sh@58 -- # cat - 00:15:09.878 14:19:51 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:09.878 14:19:51 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:15:09.878 14:19:51 -- fips/fips.sh@116 -- # mapfile -t providers 00:15:09.878 14:19:51 -- fips/fips.sh@116 -- # openssl list -providers 00:15:09.878 14:19:51 -- fips/fips.sh@116 -- # grep name 00:15:09.878 14:19:51 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:15:09.878 14:19:51 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:15:09.878 14:19:51 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:09.878 14:19:51 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:15:09.878 14:19:51 -- fips/fips.sh@127 -- # : 00:15:09.878 14:19:51 -- common/autotest_common.sh@638 -- # local es=0 00:15:09.878 14:19:51 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:09.878 14:19:51 -- common/autotest_common.sh@626 -- # local arg=openssl 00:15:09.878 14:19:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:09.878 14:19:51 -- common/autotest_common.sh@630 -- # type -t openssl 00:15:09.878 14:19:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:09.878 14:19:51 -- common/autotest_common.sh@632 -- # type -P openssl 00:15:09.878 14:19:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:09.878 14:19:51 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:15:09.878 14:19:51 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:15:09.878 14:19:51 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:15:10.137 Error setting digest 00:15:10.137 00127657877F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:15:10.137 00127657877F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:15:10.137 14:19:51 -- common/autotest_common.sh@641 -- # es=1 00:15:10.137 14:19:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:10.137 14:19:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:10.137 14:19:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:10.137 14:19:51 -- fips/fips.sh@130 -- # nvmftestinit 00:15:10.137 14:19:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:10.137 14:19:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:10.137 14:19:51 -- nvmf/common.sh@437 -- # prepare_net_devs 
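The NOT openssl md5 /dev/fd/62 step above is the heart of the FIPS verification: with the Red Hat FIPS provider loaded through the generated spdk_fips.conf, a weak digest must be refused, so the "Error setting digest" output and es=1 are exactly what the NOT wrapper counts as success. The same probe works by hand (a sketch, assuming an OpenSSL 3.x build with the FIPS module configured):

    OPENSSL_CONF=spdk_fips.conf openssl md5 /dev/null \
        && echo 'FIPS NOT enforced: md5 was accepted' \
        || echo 'FIPS enforced: md5 rejected'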
00:15:10.137 14:19:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:10.137 14:19:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:10.137 14:19:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.137 14:19:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:10.137 14:19:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.137 14:19:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:10.137 14:19:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:10.137 14:19:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:10.137 14:19:51 -- common/autotest_common.sh@10 -- # set +x 00:15:11.513 14:19:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:11.513 14:19:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:11.513 14:19:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:11.513 14:19:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:11.513 14:19:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:11.513 14:19:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:11.513 14:19:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:11.513 14:19:53 -- nvmf/common.sh@295 -- # net_devs=() 00:15:11.513 14:19:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:11.513 14:19:53 -- nvmf/common.sh@296 -- # e810=() 00:15:11.513 14:19:53 -- nvmf/common.sh@296 -- # local -ga e810 00:15:11.513 14:19:53 -- nvmf/common.sh@297 -- # x722=() 00:15:11.513 14:19:53 -- nvmf/common.sh@297 -- # local -ga x722 00:15:11.513 14:19:53 -- nvmf/common.sh@298 -- # mlx=() 00:15:11.513 14:19:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:11.513 14:19:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:11.513 14:19:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:11.513 14:19:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:11.513 14:19:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:11.513 14:19:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:11.513 14:19:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:11.513 14:19:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:11.513 14:19:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:11.513 14:19:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:11.513 14:19:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:11.513 14:19:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:11.513 14:19:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:11.513 14:19:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:11.513 14:19:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:11.513 14:19:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:11.513 14:19:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:11.513 14:19:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:11.513 14:19:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:11.513 14:19:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:15:11.513 Found 0000:08:00.0 (0x8086 - 0x159b) 00:15:11.513 14:19:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:11.513 14:19:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:11.513 14:19:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.513 14:19:53 -- nvmf/common.sh@351 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.513 14:19:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:11.513 14:19:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:11.513 14:19:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:15:11.513 Found 0000:08:00.1 (0x8086 - 0x159b) 00:15:11.513 14:19:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:11.513 14:19:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:11.513 14:19:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.513 14:19:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.513 14:19:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:11.513 14:19:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:11.513 14:19:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:11.513 14:19:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:11.513 14:19:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:11.513 14:19:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.513 14:19:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:11.513 14:19:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.513 14:19:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:15:11.513 Found net devices under 0000:08:00.0: cvl_0_0 00:15:11.513 14:19:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.513 14:19:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:11.513 14:19:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.513 14:19:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:11.514 14:19:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.514 14:19:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:15:11.514 Found net devices under 0000:08:00.1: cvl_0_1 00:15:11.514 14:19:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.514 14:19:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:11.514 14:19:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:11.514 14:19:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:11.514 14:19:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:11.514 14:19:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:11.514 14:19:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.514 14:19:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.514 14:19:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:11.514 14:19:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:11.514 14:19:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:11.514 14:19:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:11.514 14:19:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:11.514 14:19:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:11.514 14:19:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.514 14:19:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:11.514 14:19:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:11.514 14:19:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:11.514 14:19:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:11.774 14:19:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:11.774 14:19:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:15:11.774 14:19:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:11.774 14:19:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:11.774 14:19:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:11.774 14:19:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:11.774 14:19:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:11.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:11.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:15:11.774 00:15:11.774 --- 10.0.0.2 ping statistics --- 00:15:11.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.774 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:15:11.774 14:19:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:11.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:11.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:15:11.774 00:15:11.774 --- 10.0.0.1 ping statistics --- 00:15:11.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.774 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:15:11.774 14:19:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:11.774 14:19:53 -- nvmf/common.sh@411 -- # return 0 00:15:11.774 14:19:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:11.774 14:19:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:11.774 14:19:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:11.774 14:19:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:11.774 14:19:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:11.774 14:19:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:11.774 14:19:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:11.774 14:19:53 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:15:11.774 14:19:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:11.774 14:19:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:11.774 14:19:53 -- common/autotest_common.sh@10 -- # set +x 00:15:11.774 14:19:53 -- nvmf/common.sh@470 -- # nvmfpid=3159246 00:15:11.774 14:19:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:11.774 14:19:53 -- nvmf/common.sh@471 -- # waitforlisten 3159246 00:15:11.774 14:19:53 -- common/autotest_common.sh@817 -- # '[' -z 3159246 ']' 00:15:11.774 14:19:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.775 14:19:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:11.775 14:19:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.775 14:19:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:11.775 14:19:53 -- common/autotest_common.sh@10 -- # set +x 00:15:11.775 [2024-04-26 14:19:53.277740] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
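The nvmf_tcp_init block traced above is how one host impersonates two: one port of the E810 pair (cvl_0_0) is moved into a private network namespace for the target, while its sibling (cvl_0_1) stays in the root namespace as the initiator. Boiled down, with the names and addresses from the log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # initiator -> target check

The two successful pings (0.248 ms and 0.132 ms round trips) confirm the path before any NVMe traffic is attempted.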
00:15:11.775 [2024-04-26 14:19:53.277833] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.775 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.775 [2024-04-26 14:19:53.341920] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.032 [2024-04-26 14:19:53.456406] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.032 [2024-04-26 14:19:53.456456] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.032 [2024-04-26 14:19:53.456471] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.032 [2024-04-26 14:19:53.456485] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.032 [2024-04-26 14:19:53.456497] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.032 [2024-04-26 14:19:53.456526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.032 14:19:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:12.032 14:19:53 -- common/autotest_common.sh@850 -- # return 0 00:15:12.032 14:19:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:12.032 14:19:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:12.032 14:19:53 -- common/autotest_common.sh@10 -- # set +x 00:15:12.032 14:19:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.032 14:19:53 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:15:12.032 14:19:53 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:12.032 14:19:53 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:12.032 14:19:53 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:12.032 14:19:53 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:12.032 14:19:53 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:12.032 14:19:53 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:12.032 14:19:53 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.597 [2024-04-26 14:19:53.872975] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.597 [2024-04-26 14:19:53.888937] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:12.597 [2024-04-26 14:19:53.889143] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.597 [2024-04-26 14:19:53.919532] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:12.597 malloc0 00:15:12.597 14:19:53 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:12.597 14:19:53 -- fips/fips.sh@147 -- # bdevperf_pid=3159366 00:15:12.597 14:19:53 -- fips/fips.sh@148 -- # waitforlisten 3159366 /var/tmp/bdevperf.sock 00:15:12.597 14:19:53 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 
-z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:12.597 14:19:53 -- common/autotest_common.sh@817 -- # '[' -z 3159366 ']' 00:15:12.597 14:19:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:12.597 14:19:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:12.597 14:19:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:12.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:12.597 14:19:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:12.597 14:19:53 -- common/autotest_common.sh@10 -- # set +x 00:15:12.597 [2024-04-26 14:19:54.024853] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:15:12.597 [2024-04-26 14:19:54.024957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159366 ] 00:15:12.597 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.597 [2024-04-26 14:19:54.084901] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.854 [2024-04-26 14:19:54.199602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.854 14:19:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:12.854 14:19:54 -- common/autotest_common.sh@850 -- # return 0 00:15:12.854 14:19:54 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:13.111 [2024-04-26 14:19:54.575476] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:13.111 [2024-04-26 14:19:54.575600] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:13.111 TLSTESTn1 00:15:13.111 14:19:54 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:13.368 Running I/O for 10 seconds... 
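Behind the ten-second run just announced sits the TLS setup from a few lines up: the interchange-format PSK is written to a file with tight permissions and passed to bdev_nvme_attach_controller. Condensed from the trace (key, NQNs and addresses exactly as logged; the key file path is shortened here):

    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    echo -n "$key" > key.txt
    chmod 0600 key.txt                    # the script sets 0600 on the PSK file
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk key.txt

Both the listener and the attach path print deprecation warnings (PSK path, spdk_nvme_ctrlr_opts.psk) because file-based PSKs were scheduled for removal in v24.09 in favor of keyring-based keys, the keyring_file_add_key route used in the earlier tls run.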
00:15:23.332 00:15:23.332 Latency(us) 00:15:23.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.332 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:23.332 Verification LBA range: start 0x0 length 0x2000 00:15:23.332 TLSTESTn1 : 10.03 2930.89 11.45 0.00 0.00 43566.48 8738.13 46797.56 00:15:23.332 =================================================================================================================== 00:15:23.332 Total : 2930.89 11.45 0.00 0.00 43566.48 8738.13 46797.56 00:15:23.332 0 00:15:23.332 14:20:04 -- fips/fips.sh@1 -- # cleanup 00:15:23.332 14:20:04 -- fips/fips.sh@15 -- # process_shm --id 0 00:15:23.332 14:20:04 -- common/autotest_common.sh@794 -- # type=--id 00:15:23.332 14:20:04 -- common/autotest_common.sh@795 -- # id=0 00:15:23.332 14:20:04 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:15:23.332 14:20:04 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:23.332 14:20:04 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:15:23.332 14:20:04 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:15:23.332 14:20:04 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:15:23.332 14:20:04 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:23.332 nvmf_trace.0 00:15:23.590 14:20:04 -- common/autotest_common.sh@809 -- # return 0 00:15:23.590 14:20:04 -- fips/fips.sh@16 -- # killprocess 3159366 00:15:23.590 14:20:04 -- common/autotest_common.sh@936 -- # '[' -z 3159366 ']' 00:15:23.590 14:20:04 -- common/autotest_common.sh@940 -- # kill -0 3159366 00:15:23.590 14:20:04 -- common/autotest_common.sh@941 -- # uname 00:15:23.590 14:20:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:23.590 14:20:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3159366 00:15:23.590 14:20:04 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:15:23.590 14:20:04 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:15:23.590 14:20:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3159366' 00:15:23.590 killing process with pid 3159366 00:15:23.590 14:20:04 -- common/autotest_common.sh@955 -- # kill 3159366 00:15:23.590 Received shutdown signal, test time was about 10.000000 seconds 00:15:23.590 00:15:23.590 Latency(us) 00:15:23.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.590 =================================================================================================================== 00:15:23.590 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:23.590 [2024-04-26 14:20:04.935298] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:23.590 14:20:04 -- common/autotest_common.sh@960 -- # wait 3159366 00:15:23.590 14:20:05 -- fips/fips.sh@17 -- # nvmftestfini 00:15:23.590 14:20:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:23.590 14:20:05 -- nvmf/common.sh@117 -- # sync 00:15:23.590 14:20:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:23.590 14:20:05 -- nvmf/common.sh@120 -- # set +e 00:15:23.590 14:20:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:23.590 14:20:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:23.849 rmmod nvme_tcp 00:15:23.849 rmmod nvme_fabrics 00:15:23.849 rmmod nvme_keyring 
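The process_shm step above is how each run preserves its evidence: the SPDK trace ring the target leaves in /dev/shm is archived next to the build output, matching the startup NOTICE that suggested copying /dev/shm/nvmf_trace.0 for offline analysis. The capture itself is a single tar call (destination path shortened here):

    tar -C /dev/shm/ -cvzf nvmf_trace.0_shm.tar.gz nvmf_trace.0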
00:15:23.849 14:20:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:23.849 14:20:05 -- nvmf/common.sh@124 -- # set -e 00:15:23.849 14:20:05 -- nvmf/common.sh@125 -- # return 0 00:15:23.849 14:20:05 -- nvmf/common.sh@478 -- # '[' -n 3159246 ']' 00:15:23.849 14:20:05 -- nvmf/common.sh@479 -- # killprocess 3159246 00:15:23.849 14:20:05 -- common/autotest_common.sh@936 -- # '[' -z 3159246 ']' 00:15:23.849 14:20:05 -- common/autotest_common.sh@940 -- # kill -0 3159246 00:15:23.849 14:20:05 -- common/autotest_common.sh@941 -- # uname 00:15:23.849 14:20:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:23.849 14:20:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3159246 00:15:23.849 14:20:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:23.849 14:20:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:23.849 14:20:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3159246' 00:15:23.849 killing process with pid 3159246 00:15:23.849 14:20:05 -- common/autotest_common.sh@955 -- # kill 3159246 00:15:23.849 [2024-04-26 14:20:05.217658] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:23.849 14:20:05 -- common/autotest_common.sh@960 -- # wait 3159246 00:15:24.109 14:20:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:24.109 14:20:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:24.109 14:20:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:24.109 14:20:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:24.109 14:20:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:24.109 14:20:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.109 14:20:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:24.109 14:20:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.016 14:20:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:26.016 14:20:07 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:26.016 00:15:26.016 real 0m16.244s 00:15:26.016 user 0m21.433s 00:15:26.016 sys 0m5.288s 00:15:26.016 14:20:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:26.016 14:20:07 -- common/autotest_common.sh@10 -- # set +x 00:15:26.016 ************************************ 00:15:26.016 END TEST nvmf_fips 00:15:26.016 ************************************ 00:15:26.016 14:20:07 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:15:26.016 14:20:07 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:15:26.016 14:20:07 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:15:26.016 14:20:07 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:15:26.016 14:20:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:26.016 14:20:07 -- common/autotest_common.sh@10 -- # set +x 00:15:27.918 14:20:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:27.918 14:20:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:27.918 14:20:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:27.918 14:20:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:27.918 14:20:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:27.918 14:20:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:27.918 14:20:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:27.918 14:20:09 -- nvmf/common.sh@295 -- # net_devs=() 00:15:27.918 14:20:09 -- nvmf/common.sh@295 -- # local -ga net_devs 
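The gather_supported_nvmf_pci_devs pass starting above (it reruns for every sub-test) matches a whitelist of Intel and Mellanox device IDs, then resolves each hit to its kernel netdev through sysfs. For one of this host's ports the lookup reduces to (PCI address and ID from the log):

    pci=0000:08:00.0
    cat /sys/bus/pci/devices/$pci/device     # 0x159b -> Intel E810, kept in the e810 list
    ls  /sys/bus/pci/devices/$pci/net/       # cvl_0_0, the interface the tests will use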
00:15:27.918 14:20:09 -- nvmf/common.sh@296 -- # e810=() 00:15:27.918 14:20:09 -- nvmf/common.sh@296 -- # local -ga e810 00:15:27.918 14:20:09 -- nvmf/common.sh@297 -- # x722=() 00:15:27.918 14:20:09 -- nvmf/common.sh@297 -- # local -ga x722 00:15:27.918 14:20:09 -- nvmf/common.sh@298 -- # mlx=() 00:15:27.918 14:20:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:27.918 14:20:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:27.918 14:20:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:27.918 14:20:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:27.918 14:20:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:27.918 14:20:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:27.918 14:20:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:27.918 14:20:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:27.918 14:20:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:27.918 14:20:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:27.918 14:20:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:27.918 14:20:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:27.918 14:20:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:27.918 14:20:09 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:27.918 14:20:09 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:27.918 14:20:09 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:27.918 14:20:09 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:27.918 14:20:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:27.918 14:20:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:27.918 14:20:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:15:27.918 Found 0000:08:00.0 (0x8086 - 0x159b) 00:15:27.918 14:20:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:27.918 14:20:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:27.918 14:20:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:27.919 14:20:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:27.919 14:20:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:27.919 14:20:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:27.919 14:20:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:15:27.919 Found 0000:08:00.1 (0x8086 - 0x159b) 00:15:27.919 14:20:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:27.919 14:20:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:27.919 14:20:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:27.919 14:20:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:27.919 14:20:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:27.919 14:20:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:27.919 14:20:09 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:27.919 14:20:09 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:27.919 14:20:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:27.919 14:20:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:27.919 14:20:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:27.919 14:20:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:27.919 14:20:09 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:08:00.0: cvl_0_0' 00:15:27.919 Found net devices under 0000:08:00.0: cvl_0_0 00:15:27.919 14:20:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:27.919 14:20:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:27.919 14:20:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:27.919 14:20:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:27.919 14:20:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:27.919 14:20:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:15:27.919 Found net devices under 0000:08:00.1: cvl_0_1 00:15:27.919 14:20:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:27.919 14:20:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:27.919 14:20:09 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:27.919 14:20:09 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:15:27.919 14:20:09 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:15:27.919 14:20:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:27.919 14:20:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:27.919 14:20:09 -- common/autotest_common.sh@10 -- # set +x 00:15:27.919 ************************************ 00:15:27.919 START TEST nvmf_perf_adq 00:15:27.919 ************************************ 00:15:27.919 14:20:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:15:27.919 * Looking for test storage... 00:15:27.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:27.919 14:20:09 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.919 14:20:09 -- nvmf/common.sh@7 -- # uname -s 00:15:27.919 14:20:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.919 14:20:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.919 14:20:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.919 14:20:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.919 14:20:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.919 14:20:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.919 14:20:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.919 14:20:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.919 14:20:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.919 14:20:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.919 14:20:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:27.919 14:20:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:15:27.919 14:20:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.919 14:20:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.919 14:20:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.919 14:20:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.919 14:20:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.919 14:20:09 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.919 14:20:09 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.919 14:20:09 -- 
scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.919 14:20:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.919 14:20:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.919 14:20:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.919 14:20:09 -- paths/export.sh@5 -- # export PATH 00:15:27.919 14:20:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.919 14:20:09 -- nvmf/common.sh@47 -- # : 0 00:15:27.919 14:20:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:27.919 14:20:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:27.919 14:20:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.919 14:20:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.919 14:20:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.919 14:20:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:27.919 14:20:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:27.919 14:20:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:27.919 14:20:09 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:15:27.919 14:20:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:27.919 14:20:09 -- common/autotest_common.sh@10 -- # set +x 00:15:29.825 14:20:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:29.825 14:20:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:29.825 14:20:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:29.825 14:20:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:29.825 
14:20:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:29.825 14:20:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:29.825 14:20:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:29.825 14:20:10 -- nvmf/common.sh@295 -- # net_devs=() 00:15:29.825 14:20:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:29.825 14:20:10 -- nvmf/common.sh@296 -- # e810=() 00:15:29.825 14:20:10 -- nvmf/common.sh@296 -- # local -ga e810 00:15:29.825 14:20:10 -- nvmf/common.sh@297 -- # x722=() 00:15:29.825 14:20:10 -- nvmf/common.sh@297 -- # local -ga x722 00:15:29.825 14:20:10 -- nvmf/common.sh@298 -- # mlx=() 00:15:29.825 14:20:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:29.825 14:20:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:29.825 14:20:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:29.825 14:20:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:29.825 14:20:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:29.825 14:20:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:29.825 14:20:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:29.825 14:20:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:29.825 14:20:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:29.825 14:20:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:29.825 14:20:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:29.825 14:20:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:29.825 14:20:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:29.825 14:20:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:29.825 14:20:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:29.825 14:20:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:29.825 14:20:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:29.825 14:20:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:29.825 14:20:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.825 14:20:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:15:29.825 Found 0000:08:00.0 (0x8086 - 0x159b) 00:15:29.825 14:20:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:29.825 14:20:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:29.825 14:20:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:29.825 14:20:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:29.825 14:20:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:29.825 14:20:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.825 14:20:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:15:29.825 Found 0000:08:00.1 (0x8086 - 0x159b) 00:15:29.825 14:20:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:29.825 14:20:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:29.825 14:20:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:29.825 14:20:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:29.825 14:20:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:29.825 14:20:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:29.825 14:20:10 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:29.825 14:20:10 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:29.825 14:20:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
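The xtrace above is nvmf/common.sh enumerating supported NICs: it seeds pci_devs from the cached e810 device IDs (0x1592/0x159b), then resolves each PCI function to its kernel netdev through sysfs. A minimal standalone sketch of that lookup, assuming only plain bash and sysfs (list_net_devs is an illustrative name, not a function in nvmf/common.sh):

# Resolve a PCI function to the net interfaces it owns, mirroring
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) in the trace above.
list_net_devs() {
    local pci=$1
    local devs=("/sys/bus/pci/devices/$pci/net/"*)
    [[ -e ${devs[0]} ]] || return 1    # unexpanded glob => no netdev bound (driver not loaded)
    printf '%s\n' "${devs[@]##*/}"     # strip the sysfs path, keep the interface name
}
list_net_devs 0000:08:00.0             # prints cvl_0_0 on this host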
00:15:29.825 14:20:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.825 14:20:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:29.825 14:20:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.825 14:20:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:15:29.825 Found net devices under 0000:08:00.0: cvl_0_0 00:15:29.825 14:20:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.825 14:20:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:29.825 14:20:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.825 14:20:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:29.825 14:20:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.825 14:20:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:15:29.825 Found net devices under 0000:08:00.1: cvl_0_1 00:15:29.825 14:20:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.825 14:20:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:29.825 14:20:10 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:29.825 14:20:10 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:15:29.825 14:20:10 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:15:29.825 14:20:10 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:15:29.825 14:20:10 -- target/perf_adq.sh@52 -- # rmmod ice 00:15:30.082 14:20:11 -- target/perf_adq.sh@53 -- # modprobe ice 00:15:31.457 14:20:12 -- target/perf_adq.sh@54 -- # sleep 5 00:15:36.739 14:20:17 -- target/perf_adq.sh@67 -- # nvmftestinit 00:15:36.739 14:20:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:36.739 14:20:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:36.739 14:20:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:36.739 14:20:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:36.739 14:20:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:36.739 14:20:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.739 14:20:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:36.739 14:20:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.739 14:20:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:36.739 14:20:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:36.739 14:20:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:36.739 14:20:17 -- common/autotest_common.sh@10 -- # set +x 00:15:36.739 14:20:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:36.739 14:20:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:36.739 14:20:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:36.739 14:20:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:36.739 14:20:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:36.739 14:20:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:36.739 14:20:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:36.739 14:20:17 -- nvmf/common.sh@295 -- # net_devs=() 00:15:36.739 14:20:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:36.739 14:20:17 -- nvmf/common.sh@296 -- # e810=() 00:15:36.739 14:20:17 -- nvmf/common.sh@296 -- # local -ga e810 00:15:36.739 14:20:17 -- nvmf/common.sh@297 -- # x722=() 00:15:36.739 14:20:17 -- nvmf/common.sh@297 -- # local -ga x722 00:15:36.739 14:20:17 -- nvmf/common.sh@298 -- # mlx=() 00:15:36.739 14:20:17 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:15:36.739 14:20:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:36.739 14:20:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:36.739 14:20:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:36.739 14:20:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:36.739 14:20:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:36.739 14:20:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:36.739 14:20:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:36.739 14:20:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:36.739 14:20:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:36.739 14:20:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:36.739 14:20:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:36.739 14:20:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:36.739 14:20:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:36.739 14:20:17 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:36.739 14:20:17 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:36.739 14:20:17 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:36.739 14:20:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:36.739 14:20:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.739 14:20:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:15:36.739 Found 0000:08:00.0 (0x8086 - 0x159b) 00:15:36.739 14:20:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:36.739 14:20:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:36.739 14:20:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.739 14:20:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.739 14:20:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:36.739 14:20:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.739 14:20:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:15:36.739 Found 0000:08:00.1 (0x8086 - 0x159b) 00:15:36.739 14:20:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:36.739 14:20:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:36.739 14:20:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.739 14:20:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.739 14:20:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:36.739 14:20:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:36.739 14:20:17 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:36.739 14:20:17 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:36.739 14:20:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.739 14:20:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.739 14:20:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:36.739 14:20:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.739 14:20:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:15:36.740 Found net devices under 0000:08:00.0: cvl_0_0 00:15:36.740 14:20:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.740 14:20:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.740 14:20:17 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.740 14:20:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:36.740 14:20:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.740 14:20:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:15:36.740 Found net devices under 0000:08:00.1: cvl_0_1 00:15:36.740 14:20:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.740 14:20:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:36.740 14:20:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:36.740 14:20:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:36.740 14:20:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:36.740 14:20:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:36.740 14:20:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.740 14:20:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.740 14:20:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:36.740 14:20:17 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:36.740 14:20:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:36.740 14:20:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:36.740 14:20:17 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:36.740 14:20:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:36.740 14:20:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.740 14:20:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:36.740 14:20:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:36.740 14:20:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:36.740 14:20:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:36.740 14:20:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:36.740 14:20:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:36.740 14:20:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:36.740 14:20:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:36.740 14:20:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:36.740 14:20:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:36.740 14:20:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:36.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:36.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:15:36.740 00:15:36.740 --- 10.0.0.2 ping statistics --- 00:15:36.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.740 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:15:36.740 14:20:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:36.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:36.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:15:36.740 00:15:36.740 --- 10.0.0.1 ping statistics --- 00:15:36.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.740 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:15:36.740 14:20:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.740 14:20:17 -- nvmf/common.sh@411 -- # return 0 00:15:36.740 14:20:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:36.740 14:20:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.740 14:20:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:36.740 14:20:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:36.740 14:20:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.740 14:20:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:36.740 14:20:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:36.740 14:20:17 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:36.740 14:20:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:36.740 14:20:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:36.740 14:20:17 -- common/autotest_common.sh@10 -- # set +x 00:15:36.740 14:20:17 -- nvmf/common.sh@470 -- # nvmfpid=3163762 00:15:36.740 14:20:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:36.740 14:20:17 -- nvmf/common.sh@471 -- # waitforlisten 3163762 00:15:36.740 14:20:17 -- common/autotest_common.sh@817 -- # '[' -z 3163762 ']' 00:15:36.740 14:20:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.740 14:20:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:36.740 14:20:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.740 14:20:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:36.740 14:20:17 -- common/autotest_common.sh@10 -- # set +x 00:15:36.740 [2024-04-26 14:20:18.036434] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:15:36.740 [2024-04-26 14:20:18.036517] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.740 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.740 [2024-04-26 14:20:18.101475] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:36.740 [2024-04-26 14:20:18.218287] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.740 [2024-04-26 14:20:18.218346] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.740 [2024-04-26 14:20:18.218363] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.740 [2024-04-26 14:20:18.218377] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.740 [2024-04-26 14:20:18.218389] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
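With both directions of the veth-style namespace link verified by ping, nvmfappstart launches the target inside cvl_0_0_ns_spdk with --wait-for-rpc. Condensed, the launch-and-wait pattern in play is roughly the following (the polling loop is an illustrative stand-in for waitforlisten in autotest_common.sh; the binary path is abbreviated):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
# Illustrative stand-in for waitforlisten: block until the RPC socket
# (/var/tmp/spdk.sock, as named in the trace) comes up.
until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.1; done
kill -0 "$nvmfpid"   # confirm the launched process is still alive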
00:15:36.740 [2024-04-26 14:20:18.218446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.740 [2024-04-26 14:20:18.218498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.740 [2024-04-26 14:20:18.218552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:36.740 [2024-04-26 14:20:18.218555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.740 14:20:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:36.740 14:20:18 -- common/autotest_common.sh@850 -- # return 0 00:15:36.740 14:20:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:36.740 14:20:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:36.740 14:20:18 -- common/autotest_common.sh@10 -- # set +x 00:15:36.740 14:20:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.740 14:20:18 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:15:36.740 14:20:18 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:15:36.740 14:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.740 14:20:18 -- common/autotest_common.sh@10 -- # set +x 00:15:36.999 14:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.999 14:20:18 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:15:36.999 14:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.999 14:20:18 -- common/autotest_common.sh@10 -- # set +x 00:15:36.999 14:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.999 14:20:18 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:15:36.999 14:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.999 14:20:18 -- common/autotest_common.sh@10 -- # set +x 00:15:36.999 [2024-04-26 14:20:18.418049] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.999 14:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.999 14:20:18 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:36.999 14:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.999 14:20:18 -- common/autotest_common.sh@10 -- # set +x 00:15:36.999 Malloc1 00:15:36.999 14:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.999 14:20:18 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:36.999 14:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.999 14:20:18 -- common/autotest_common.sh@10 -- # set +x 00:15:36.999 14:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.999 14:20:18 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:36.999 14:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.999 14:20:18 -- common/autotest_common.sh@10 -- # set +x 00:15:36.999 14:20:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.999 14:20:18 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.999 14:20:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.999 14:20:18 -- common/autotest_common.sh@10 -- # set +x 00:15:36.999 [2024-04-26 14:20:18.466823] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.999 14:20:18 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.999 14:20:18 -- target/perf_adq.sh@73 -- # perfpid=3163794 00:15:36.999 14:20:18 -- target/perf_adq.sh@74 -- # sleep 2 00:15:36.999 14:20:18 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:36.999 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.533 14:20:20 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:15:39.533 14:20:20 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:15:39.533 14:20:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:39.533 14:20:20 -- target/perf_adq.sh@76 -- # wc -l 00:15:39.533 14:20:20 -- common/autotest_common.sh@10 -- # set +x 00:15:39.533 14:20:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:39.533 14:20:20 -- target/perf_adq.sh@76 -- # count=4 00:15:39.533 14:20:20 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:15:39.533 14:20:20 -- target/perf_adq.sh@81 -- # wait 3163794 00:15:47.650 Initializing NVMe Controllers 00:15:47.650 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:47.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:15:47.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:15:47.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:15:47.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:15:47.650 Initialization complete. Launching workers. 00:15:47.650 ======================================================== 00:15:47.650 Latency(us) 00:15:47.650 Device Information : IOPS MiB/s Average min max 00:15:47.650 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9566.80 37.37 6690.10 3104.55 9070.89 00:15:47.650 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9774.50 38.18 6548.73 3277.18 8030.00 00:15:47.650 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9670.40 37.77 6618.00 5697.12 8277.46 00:15:47.650 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9672.10 37.78 6618.84 3564.68 9069.59 00:15:47.650 ======================================================== 00:15:47.650 Total : 38683.79 151.11 6618.54 3104.55 9070.89 00:15:47.650 00:15:47.650 14:20:28 -- target/perf_adq.sh@82 -- # nvmftestfini 00:15:47.650 14:20:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:47.650 14:20:28 -- nvmf/common.sh@117 -- # sync 00:15:47.650 14:20:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:47.650 14:20:28 -- nvmf/common.sh@120 -- # set +e 00:15:47.650 14:20:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:47.650 14:20:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:47.650 rmmod nvme_tcp 00:15:47.650 rmmod nvme_fabrics 00:15:47.650 rmmod nvme_keyring 00:15:47.650 14:20:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:47.650 14:20:28 -- nvmf/common.sh@124 -- # set -e 00:15:47.650 14:20:28 -- nvmf/common.sh@125 -- # return 0 00:15:47.650 14:20:28 -- nvmf/common.sh@478 -- # '[' -n 3163762 ']' 00:15:47.650 14:20:28 -- nvmf/common.sh@479 -- # killprocess 3163762 00:15:47.650 14:20:28 -- common/autotest_common.sh@936 -- # '[' -z 3163762 ']' 00:15:47.650 14:20:28 -- common/autotest_common.sh@940 -- # kill 
-0 3163762 00:15:47.650 14:20:28 -- common/autotest_common.sh@941 -- # uname 00:15:47.650 14:20:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:47.650 14:20:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3163762 00:15:47.650 14:20:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:47.650 14:20:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:47.650 14:20:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3163762' 00:15:47.650 killing process with pid 3163762 00:15:47.650 14:20:28 -- common/autotest_common.sh@955 -- # kill 3163762 00:15:47.650 14:20:28 -- common/autotest_common.sh@960 -- # wait 3163762 00:15:47.650 14:20:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:47.650 14:20:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:47.650 14:20:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:47.650 14:20:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:47.650 14:20:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:47.650 14:20:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.650 14:20:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.650 14:20:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.554 14:20:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:49.554 14:20:30 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:15:49.554 14:20:30 -- target/perf_adq.sh@52 -- # rmmod ice 00:15:50.124 14:20:31 -- target/perf_adq.sh@53 -- # modprobe ice 00:15:51.570 14:20:32 -- target/perf_adq.sh@54 -- # sleep 5 00:15:56.865 14:20:37 -- target/perf_adq.sh@87 -- # nvmftestinit 00:15:56.865 14:20:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:56.865 14:20:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.865 14:20:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:56.865 14:20:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:56.865 14:20:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:56.865 14:20:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.865 14:20:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.865 14:20:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.865 14:20:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:56.865 14:20:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:56.865 14:20:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:56.865 14:20:37 -- common/autotest_common.sh@10 -- # set +x 00:15:56.865 14:20:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:56.865 14:20:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:56.865 14:20:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:56.865 14:20:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:56.865 14:20:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:56.865 14:20:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:56.865 14:20:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:56.865 14:20:37 -- nvmf/common.sh@295 -- # net_devs=() 00:15:56.865 14:20:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:56.865 14:20:37 -- nvmf/common.sh@296 -- # e810=() 00:15:56.865 14:20:37 -- nvmf/common.sh@296 -- # local -ga e810 00:15:56.865 14:20:37 -- nvmf/common.sh@297 -- # x722=() 00:15:56.865 14:20:37 -- nvmf/common.sh@297 -- # local -ga x722 00:15:56.865 14:20:37 -- nvmf/common.sh@298 -- # mlx=() 00:15:56.865 14:20:37 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:15:56.865 14:20:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:56.865 14:20:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:56.865 14:20:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:56.865 14:20:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:56.865 14:20:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:56.865 14:20:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:56.865 14:20:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:56.865 14:20:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:56.865 14:20:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:56.865 14:20:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:56.865 14:20:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:56.865 14:20:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:56.865 14:20:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:56.865 14:20:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:56.865 14:20:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:56.865 14:20:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:56.865 14:20:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:56.865 14:20:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:56.865 14:20:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:15:56.865 Found 0000:08:00.0 (0x8086 - 0x159b) 00:15:56.866 14:20:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:56.866 14:20:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:56.866 14:20:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.866 14:20:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.866 14:20:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:56.866 14:20:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:56.866 14:20:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:15:56.866 Found 0000:08:00.1 (0x8086 - 0x159b) 00:15:56.866 14:20:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:56.866 14:20:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:56.866 14:20:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.866 14:20:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.866 14:20:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:56.866 14:20:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:56.866 14:20:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:56.866 14:20:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:56.866 14:20:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:56.866 14:20:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.866 14:20:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:56.866 14:20:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.866 14:20:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:15:56.866 Found net devices under 0000:08:00.0: cvl_0_0 00:15:56.866 14:20:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.866 14:20:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:56.866 14:20:37 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.866 14:20:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:56.866 14:20:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.866 14:20:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:15:56.866 Found net devices under 0000:08:00.1: cvl_0_1 00:15:56.866 14:20:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.866 14:20:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:56.866 14:20:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:56.866 14:20:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:56.866 14:20:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:56.866 14:20:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:56.866 14:20:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.866 14:20:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.866 14:20:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:56.866 14:20:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:56.866 14:20:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:56.866 14:20:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:56.866 14:20:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:56.866 14:20:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:56.866 14:20:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.866 14:20:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:56.866 14:20:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:56.866 14:20:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:56.866 14:20:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:56.866 14:20:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:56.866 14:20:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:56.866 14:20:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:56.866 14:20:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:56.866 14:20:38 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:56.866 14:20:38 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:56.866 14:20:38 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:56.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:56.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:15:56.866 00:15:56.866 --- 10.0.0.2 ping statistics --- 00:15:56.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.866 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:15:56.866 14:20:38 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:56.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:56.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:15:56.866 00:15:56.866 --- 10.0.0.1 ping statistics --- 00:15:56.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.866 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:15:56.866 14:20:38 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.866 14:20:38 -- nvmf/common.sh@411 -- # return 0 00:15:56.866 14:20:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:56.866 14:20:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.866 14:20:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:56.866 14:20:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:56.866 14:20:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.866 14:20:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:56.866 14:20:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:56.866 14:20:38 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:15:56.866 14:20:38 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:15:56.866 14:20:38 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:15:56.866 14:20:38 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:15:56.866 net.core.busy_poll = 1 00:15:56.866 14:20:38 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:15:56.866 net.core.busy_read = 1 00:15:56.866 14:20:38 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:15:56.866 14:20:38 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:15:56.866 14:20:38 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:15:56.866 14:20:38 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:15:56.866 14:20:38 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:15:56.866 14:20:38 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:56.866 14:20:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:56.866 14:20:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:56.866 14:20:38 -- common/autotest_common.sh@10 -- # set +x 00:15:56.866 14:20:38 -- nvmf/common.sh@470 -- # nvmfpid=3165797 00:15:56.866 14:20:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:56.866 14:20:38 -- nvmf/common.sh@471 -- # waitforlisten 3165797 00:15:56.866 14:20:38 -- common/autotest_common.sh@817 -- # '[' -z 3165797 ']' 00:15:56.866 14:20:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.866 14:20:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:56.866 14:20:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
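adq_configure_driver above is the heart of the ADQ setup: hardware TC offload on, the ice driver's packet-inspect optimization off, kernel busy polling on, then an mqprio channel split with a flower filter steering NVMe/TCP traffic (dst port 4420) into traffic class 1. Collected in one place for reference (in the test every command runs under ip netns exec cvl_0_0_ns_spdk; interface name and queue layout are the ones from this run):

ethtool --offload cvl_0_0 hw-tc-offload on
ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Two traffic classes: TC0 = queues 0-1, TC1 = queues 2-3, offloaded to hardware.
tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev cvl_0_0 ingress
# Steer NVMe/TCP (10.0.0.2:4420) into TC1, hardware-only (skip_sw).
tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1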
00:15:56.866 14:20:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:56.866 14:20:38 -- common/autotest_common.sh@10 -- # set +x 00:15:56.866 [2024-04-26 14:20:38.235741] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:15:56.866 [2024-04-26 14:20:38.235852] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.866 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.866 [2024-04-26 14:20:38.307401] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:56.866 [2024-04-26 14:20:38.426454] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.866 [2024-04-26 14:20:38.426516] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:56.866 [2024-04-26 14:20:38.426532] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:56.866 [2024-04-26 14:20:38.426545] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:56.866 [2024-04-26 14:20:38.426557] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:56.866 [2024-04-26 14:20:38.426618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.866 [2024-04-26 14:20:38.426670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:56.866 [2024-04-26 14:20:38.426721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:56.866 [2024-04-26 14:20:38.426726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.125 14:20:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:57.125 14:20:38 -- common/autotest_common.sh@850 -- # return 0 00:15:57.125 14:20:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:57.125 14:20:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:57.125 14:20:38 -- common/autotest_common.sh@10 -- # set +x 00:15:57.125 14:20:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.125 14:20:38 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:15:57.125 14:20:38 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:15:57.125 14:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:57.125 14:20:38 -- common/autotest_common.sh@10 -- # set +x 00:15:57.125 14:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:57.125 14:20:38 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:15:57.125 14:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:57.125 14:20:38 -- common/autotest_common.sh@10 -- # set +x 00:15:57.125 14:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:57.125 14:20:38 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:15:57.125 14:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:57.125 14:20:38 -- common/autotest_common.sh@10 -- # set +x 00:15:57.125 [2024-04-26 14:20:38.624317] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.125 14:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:57.125 14:20:38 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 
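adq_configure_nvmf_target 1 then drives the target over RPC: placement-id-based socket grouping plus zero-copy send on the posix socket layer, a TCP transport pinned to socket priority 1, and a Malloc-backed subsystem listening on 10.0.0.2:4420. Written as direct scripts/rpc.py invocations (rpc_cmd is the autotest wrapper around that script), the sequence, including the subsystem steps that follow immediately below, is:

scripts/rpc.py sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
scripts/rpc.py framework_start_init
scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420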
00:15:57.125 14:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:57.125 14:20:38 -- common/autotest_common.sh@10 -- # set +x 00:15:57.125 Malloc1 00:15:57.125 14:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:57.125 14:20:38 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:57.125 14:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:57.125 14:20:38 -- common/autotest_common.sh@10 -- # set +x 00:15:57.125 14:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:57.125 14:20:38 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:57.125 14:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:57.125 14:20:38 -- common/autotest_common.sh@10 -- # set +x 00:15:57.125 14:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:57.125 14:20:38 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.125 14:20:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:57.125 14:20:38 -- common/autotest_common.sh@10 -- # set +x 00:15:57.125 [2024-04-26 14:20:38.673811] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.125 14:20:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:57.125 14:20:38 -- target/perf_adq.sh@94 -- # perfpid=3165861 00:15:57.125 14:20:38 -- target/perf_adq.sh@95 -- # sleep 2 00:15:57.125 14:20:38 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:57.384 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.287 14:20:40 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:15:59.287 14:20:40 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:15:59.287 14:20:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:59.287 14:20:40 -- target/perf_adq.sh@97 -- # wc -l 00:15:59.287 14:20:40 -- common/autotest_common.sh@10 -- # set +x 00:15:59.287 14:20:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:59.287 14:20:40 -- target/perf_adq.sh@97 -- # count=2 00:15:59.287 14:20:40 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:15:59.287 14:20:40 -- target/perf_adq.sh@103 -- # wait 3165861 00:16:07.407 Initializing NVMe Controllers 00:16:07.408 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:07.408 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:16:07.408 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:16:07.408 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:16:07.408 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:16:07.408 Initialization complete. Launching workers. 
00:16:07.408 ======================================================== 00:16:07.408 Latency(us) 00:16:07.408 Device Information : IOPS MiB/s Average min max 00:16:07.408 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6662.70 26.03 9612.32 1491.51 53226.29 00:16:07.408 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6680.00 26.09 9614.73 1831.60 55517.12 00:16:07.408 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5454.20 21.31 11742.54 2212.02 55351.20 00:16:07.408 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5705.20 22.29 11225.83 2148.69 55875.38 00:16:07.408 ======================================================== 00:16:07.408 Total : 24502.10 95.71 10462.87 1491.51 55875.38 00:16:07.408 00:16:07.408 14:20:48 -- target/perf_adq.sh@104 -- # nvmftestfini 00:16:07.408 14:20:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:07.408 14:20:48 -- nvmf/common.sh@117 -- # sync 00:16:07.408 14:20:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:07.408 14:20:48 -- nvmf/common.sh@120 -- # set +e 00:16:07.408 14:20:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:07.408 14:20:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:07.408 rmmod nvme_tcp 00:16:07.666 rmmod nvme_fabrics 00:16:07.666 rmmod nvme_keyring 00:16:07.666 14:20:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:07.666 14:20:49 -- nvmf/common.sh@124 -- # set -e 00:16:07.666 14:20:49 -- nvmf/common.sh@125 -- # return 0 00:16:07.666 14:20:49 -- nvmf/common.sh@478 -- # '[' -n 3165797 ']' 00:16:07.666 14:20:49 -- nvmf/common.sh@479 -- # killprocess 3165797 00:16:07.666 14:20:49 -- common/autotest_common.sh@936 -- # '[' -z 3165797 ']' 00:16:07.666 14:20:49 -- common/autotest_common.sh@940 -- # kill -0 3165797 00:16:07.666 14:20:49 -- common/autotest_common.sh@941 -- # uname 00:16:07.666 14:20:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:07.666 14:20:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3165797 00:16:07.666 14:20:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:07.666 14:20:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:07.666 14:20:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3165797' 00:16:07.666 killing process with pid 3165797 00:16:07.666 14:20:49 -- common/autotest_common.sh@955 -- # kill 3165797 00:16:07.666 14:20:49 -- common/autotest_common.sh@960 -- # wait 3165797 00:16:07.926 14:20:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:07.926 14:20:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:07.926 14:20:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:07.926 14:20:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:07.926 14:20:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:07.926 14:20:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.926 14:20:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.926 14:20:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.219 14:20:52 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:11.219 14:20:52 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:16:11.219 00:16:11.219 real 0m43.101s 00:16:11.219 user 2m37.316s 00:16:11.219 sys 0m10.012s 00:16:11.219 14:20:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:11.219 14:20:52 -- common/autotest_common.sh@10 -- # set +x 00:16:11.219 
************************************ 00:16:11.219 END TEST nvmf_perf_adq 00:16:11.219 ************************************ 00:16:11.219 14:20:52 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:16:11.219 14:20:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:11.219 14:20:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:11.219 14:20:52 -- common/autotest_common.sh@10 -- # set +x 00:16:11.219 ************************************ 00:16:11.219 START TEST nvmf_shutdown 00:16:11.219 ************************************ 00:16:11.219 14:20:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:16:11.219 * Looking for test storage... 00:16:11.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:11.219 14:20:52 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:11.219 14:20:52 -- nvmf/common.sh@7 -- # uname -s 00:16:11.219 14:20:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.219 14:20:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.219 14:20:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.219 14:20:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.219 14:20:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.219 14:20:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.219 14:20:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.219 14:20:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.219 14:20:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.219 14:20:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.219 14:20:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:11.219 14:20:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:16:11.219 14:20:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.219 14:20:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.219 14:20:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:11.219 14:20:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:11.219 14:20:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:11.219 14:20:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.219 14:20:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.219 14:20:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.219 14:20:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.219 14:20:52 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.219 14:20:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.219 14:20:52 -- paths/export.sh@5 -- # export PATH 00:16:11.219 14:20:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.219 14:20:52 -- nvmf/common.sh@47 -- # : 0 00:16:11.219 14:20:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:11.219 14:20:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:11.219 14:20:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:11.219 14:20:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.219 14:20:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.219 14:20:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:11.219 14:20:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:11.219 14:20:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:11.219 14:20:52 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:11.219 14:20:52 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:11.219 14:20:52 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:16:11.219 14:20:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:11.219 14:20:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:11.219 14:20:52 -- common/autotest_common.sh@10 -- # set +x 00:16:11.219 ************************************ 00:16:11.219 START TEST nvmf_shutdown_tc1 00:16:11.219 ************************************ 00:16:11.219 14:20:52 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:16:11.219 14:20:52 -- target/shutdown.sh@74 -- # starttarget 00:16:11.219 14:20:52 -- target/shutdown.sh@15 -- # nvmftestinit 00:16:11.219 14:20:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:11.219 14:20:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:11.219 14:20:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:11.219 14:20:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:11.219 14:20:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:11.219 
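nvmftestinit now repeats, for nvmf_shutdown_tc1, the namespace plumbing already traced twice above: tear down any stale cvl_0_0_ns_spdk, move the target-side port into a fresh namespace, address both ends, and open TCP/4420 through iptables. The per-command trace follows below; condensed (addresses and interface names as used throughout this run):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator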
14:20:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.219 14:20:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.219 14:20:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.219 14:20:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:11.219 14:20:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:11.219 14:20:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:11.219 14:20:52 -- common/autotest_common.sh@10 -- # set +x 00:16:13.126 14:20:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:13.126 14:20:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:13.126 14:20:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:13.126 14:20:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:13.126 14:20:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:13.126 14:20:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:13.126 14:20:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:13.126 14:20:54 -- nvmf/common.sh@295 -- # net_devs=() 00:16:13.126 14:20:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:13.126 14:20:54 -- nvmf/common.sh@296 -- # e810=() 00:16:13.126 14:20:54 -- nvmf/common.sh@296 -- # local -ga e810 00:16:13.126 14:20:54 -- nvmf/common.sh@297 -- # x722=() 00:16:13.126 14:20:54 -- nvmf/common.sh@297 -- # local -ga x722 00:16:13.126 14:20:54 -- nvmf/common.sh@298 -- # mlx=() 00:16:13.126 14:20:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:13.126 14:20:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:13.126 14:20:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:13.126 14:20:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:13.126 14:20:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:13.126 14:20:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:13.126 14:20:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:13.126 14:20:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:13.126 14:20:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:13.126 14:20:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:13.126 14:20:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:13.126 14:20:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:13.126 14:20:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:13.126 14:20:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:13.126 14:20:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:13.126 14:20:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:13.126 14:20:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:13.126 14:20:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:13.126 14:20:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:13.126 14:20:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:16:13.126 Found 0000:08:00.0 (0x8086 - 0x159b) 00:16:13.127 14:20:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:13.127 14:20:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:13.127 14:20:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:13.127 14:20:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:13.127 14:20:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:13.127 14:20:54 -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:16:13.127 14:20:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:16:13.127 Found 0000:08:00.1 (0x8086 - 0x159b) 00:16:13.127 14:20:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:13.127 14:20:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:13.127 14:20:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:13.127 14:20:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:13.127 14:20:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:13.127 14:20:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:13.127 14:20:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:13.127 14:20:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:13.127 14:20:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:13.127 14:20:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:13.127 14:20:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:13.127 14:20:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:13.127 14:20:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:16:13.127 Found net devices under 0000:08:00.0: cvl_0_0 00:16:13.127 14:20:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:13.127 14:20:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:13.127 14:20:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:13.127 14:20:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:13.127 14:20:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:13.127 14:20:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:16:13.127 Found net devices under 0000:08:00.1: cvl_0_1 00:16:13.127 14:20:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:13.127 14:20:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:13.127 14:20:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:13.127 14:20:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:13.127 14:20:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:13.127 14:20:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:13.127 14:20:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:13.127 14:20:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:13.127 14:20:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:13.127 14:20:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:13.127 14:20:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:13.127 14:20:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:13.127 14:20:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:13.127 14:20:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:13.127 14:20:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:13.127 14:20:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:13.127 14:20:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:13.127 14:20:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:13.127 14:20:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:13.127 14:20:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:13.127 14:20:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:13.127 14:20:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:13.127 14:20:54 -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:13.127 14:20:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:13.127 14:20:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:13.127 14:20:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:13.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:13.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:16:13.127 00:16:13.127 --- 10.0.0.2 ping statistics --- 00:16:13.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.127 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:16:13.127 14:20:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:13.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:13.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:16:13.127 00:16:13.127 --- 10.0.0.1 ping statistics --- 00:16:13.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.127 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:16:13.127 14:20:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:13.127 14:20:54 -- nvmf/common.sh@411 -- # return 0 00:16:13.127 14:20:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:13.127 14:20:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:13.127 14:20:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:13.127 14:20:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:13.127 14:20:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:13.127 14:20:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:13.127 14:20:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:13.127 14:20:54 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:16:13.127 14:20:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:13.127 14:20:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:13.127 14:20:54 -- common/autotest_common.sh@10 -- # set +x 00:16:13.127 14:20:54 -- nvmf/common.sh@470 -- # nvmfpid=3168451 00:16:13.127 14:20:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:13.127 14:20:54 -- nvmf/common.sh@471 -- # waitforlisten 3168451 00:16:13.127 14:20:54 -- common/autotest_common.sh@817 -- # '[' -z 3168451 ']' 00:16:13.127 14:20:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.127 14:20:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:13.127 14:20:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.127 14:20:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:13.127 14:20:54 -- common/autotest_common.sh@10 -- # set +x 00:16:13.127 [2024-04-26 14:20:54.426718] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
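The nvmf_tcp_init sequence traced above is what turns one dual-port E810 into a self-contained target/initiator pair: one port (cvl_0_1, 10.0.0.1) stays in the default namespace as the initiator, while its sibling (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace as the target, so both ends of the NVMe/TCP connection exercise real hardware on a single host. A minimal sketch of the same rig, using the interface names discovered in this run (they will differ on other machines):

# Minimal sketch of the namespace rig nvmf_tcp_init builds above.
ip netns add cvl_0_0_ns_spdk                             # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                       # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target -> initiator

The iptables rule is inserted at position 1 of INPUT so port 4420 is reachable regardless of the node's default firewall policy, and the two pings prove L3 connectivity in both directions before any NVMe traffic is attempted.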
00:16:13.127 [2024-04-26 14:20:54.426804] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.127 EAL: No free 2048 kB hugepages reported on node 1 00:16:13.127 [2024-04-26 14:20:54.490693] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:13.127 [2024-04-26 14:20:54.605991] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:13.127 [2024-04-26 14:20:54.606047] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:13.127 [2024-04-26 14:20:54.606063] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:13.127 [2024-04-26 14:20:54.606076] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:13.127 [2024-04-26 14:20:54.606087] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:13.127 [2024-04-26 14:20:54.606173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:13.127 [2024-04-26 14:20:54.606227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:13.127 [2024-04-26 14:20:54.606276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:13.127 [2024-04-26 14:20:54.606279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.386 14:20:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:13.386 14:20:54 -- common/autotest_common.sh@850 -- # return 0 00:16:13.386 14:20:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:13.386 14:20:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:13.386 14:20:54 -- common/autotest_common.sh@10 -- # set +x 00:16:13.386 14:20:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:13.386 14:20:54 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:13.386 14:20:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.386 14:20:54 -- common/autotest_common.sh@10 -- # set +x 00:16:13.386 [2024-04-26 14:20:54.752408] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:13.386 14:20:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.386 14:20:54 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:16:13.386 14:20:54 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:16:13.386 14:20:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:13.386 14:20:54 -- common/autotest_common.sh@10 -- # set +x 00:16:13.386 14:20:54 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:13.386 14:20:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:13.386 14:20:54 -- target/shutdown.sh@28 -- # cat 00:16:13.386 14:20:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:13.386 14:20:54 -- target/shutdown.sh@28 -- # cat 00:16:13.386 14:20:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:13.386 14:20:54 -- target/shutdown.sh@28 -- # cat 00:16:13.386 14:20:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:13.386 14:20:54 -- target/shutdown.sh@28 -- # cat 00:16:13.386 14:20:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:13.386 14:20:54 -- target/shutdown.sh@28 
-- # cat 00:16:13.386 14:20:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:13.386 14:20:54 -- target/shutdown.sh@28 -- # cat 00:16:13.386 14:20:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:13.386 14:20:54 -- target/shutdown.sh@28 -- # cat 00:16:13.386 14:20:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:13.386 14:20:54 -- target/shutdown.sh@28 -- # cat 00:16:13.386 14:20:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:13.386 14:20:54 -- target/shutdown.sh@28 -- # cat 00:16:13.386 14:20:54 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:13.386 14:20:54 -- target/shutdown.sh@28 -- # cat 00:16:13.386 14:20:54 -- target/shutdown.sh@35 -- # rpc_cmd 00:16:13.386 14:20:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:13.386 14:20:54 -- common/autotest_common.sh@10 -- # set +x 00:16:13.386 Malloc1 00:16:13.386 [2024-04-26 14:20:54.834789] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:13.386 Malloc2 00:16:13.386 Malloc3 00:16:13.386 Malloc4 00:16:13.645 Malloc5 00:16:13.645 Malloc6 00:16:13.645 Malloc7 00:16:13.645 Malloc8 00:16:13.645 Malloc9 00:16:13.903 Malloc10 00:16:13.903 14:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:13.903 14:20:55 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:13.903 14:20:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:13.903 14:20:55 -- common/autotest_common.sh@10 -- # set +x 00:16:13.903 14:20:55 -- target/shutdown.sh@78 -- # perfpid=3168593 00:16:13.904 14:20:55 -- target/shutdown.sh@79 -- # waitforlisten 3168593 /var/tmp/bdevperf.sock 00:16:13.904 14:20:55 -- common/autotest_common.sh@817 -- # '[' -z 3168593 ']' 00:16:13.904 14:20:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:13.904 14:20:55 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:16:13.904 14:20:55 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:13.904 14:20:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:13.904 14:20:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:13.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
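Each "cat" at shutdown.sh@28 above appends one subsystem's worth of RPC lines to rpcs.txt, and the bare rpc_cmd at shutdown.sh@35 then replays the whole batch against the target, which is what produces the Malloc1 through Malloc10 bdevs and the listener on 10.0.0.2:4420 seen here. A hypothetical reconstruction of one loop pass, assuming current rpc.py command names and the autotest MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE variables; the exact flags live in shutdown.sh and are not shown in this log:

# Hypothetical sketch of the rpcs.txt batch built by the loop above.
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
for i in "${num_subsystems[@]}"; do    # i = 1..10
    cat >> "$rpcs" <<EOF
bdev_malloc_create -b Malloc$i $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_cmd < "$rpcs"    # assumed: one rpc.py session executes the whole batch

Batching matters here: one rpc.py invocation over the UNIX socket is far cheaper than forty separate ones, and it keeps subsystem creation atomic relative to the rest of the test.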
00:16:13.904 14:20:55 -- nvmf/common.sh@521 -- # config=() 00:16:13.904 14:20:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:13.904 14:20:55 -- nvmf/common.sh@521 -- # local subsystem config 00:16:13.904 14:20:55 -- common/autotest_common.sh@10 -- # set +x 00:16:13.904 14:20:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:13.904 14:20:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:13.904 { 00:16:13.904 "params": { 00:16:13.904 "name": "Nvme$subsystem", 00:16:13.904 "trtype": "$TEST_TRANSPORT", 00:16:13.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:13.904 "adrfam": "ipv4", 00:16:13.904 "trsvcid": "$NVMF_PORT", 00:16:13.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:13.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:13.904 "hdgst": ${hdgst:-false}, 00:16:13.904 "ddgst": ${ddgst:-false} 00:16:13.904 }, 00:16:13.904 "method": "bdev_nvme_attach_controller" 00:16:13.904 } 00:16:13.904 EOF 00:16:13.904 )") 00:16:13.904 14:20:55 -- nvmf/common.sh@543 -- # cat 00:16:13.904 14:20:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:13.904 14:20:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:13.904 { 00:16:13.904 "params": { 00:16:13.904 "name": "Nvme$subsystem", 00:16:13.904 "trtype": "$TEST_TRANSPORT", 00:16:13.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:13.904 "adrfam": "ipv4", 00:16:13.904 "trsvcid": "$NVMF_PORT", 00:16:13.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:13.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:13.904 "hdgst": ${hdgst:-false}, 00:16:13.904 "ddgst": ${ddgst:-false} 00:16:13.904 }, 00:16:13.904 "method": "bdev_nvme_attach_controller" 00:16:13.904 } 00:16:13.904 EOF 00:16:13.904 )") 00:16:13.904 14:20:55 -- nvmf/common.sh@543 -- # cat 00:16:13.904 14:20:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:13.904 14:20:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:13.904 { 00:16:13.904 "params": { 00:16:13.904 "name": "Nvme$subsystem", 00:16:13.904 "trtype": "$TEST_TRANSPORT", 00:16:13.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:13.904 "adrfam": "ipv4", 00:16:13.904 "trsvcid": "$NVMF_PORT", 00:16:13.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:13.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:13.904 "hdgst": ${hdgst:-false}, 00:16:13.904 "ddgst": ${ddgst:-false} 00:16:13.904 }, 00:16:13.904 "method": "bdev_nvme_attach_controller" 00:16:13.904 } 00:16:13.904 EOF 00:16:13.904 )") 00:16:13.904 14:20:55 -- nvmf/common.sh@543 -- # cat 00:16:13.904 14:20:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:13.904 14:20:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:13.904 { 00:16:13.904 "params": { 00:16:13.904 "name": "Nvme$subsystem", 00:16:13.904 "trtype": "$TEST_TRANSPORT", 00:16:13.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:13.904 "adrfam": "ipv4", 00:16:13.904 "trsvcid": "$NVMF_PORT", 00:16:13.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:13.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:13.904 "hdgst": ${hdgst:-false}, 00:16:13.904 "ddgst": ${ddgst:-false} 00:16:13.904 }, 00:16:13.904 "method": "bdev_nvme_attach_controller" 00:16:13.904 } 00:16:13.904 EOF 00:16:13.904 )") 00:16:13.904 14:20:55 -- nvmf/common.sh@543 -- # cat 00:16:13.904 14:20:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:13.904 14:20:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:13.904 { 00:16:13.904 "params": { 00:16:13.904 "name": "Nvme$subsystem", 00:16:13.904 "trtype": 
"$TEST_TRANSPORT", 00:16:13.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:13.904 "adrfam": "ipv4", 00:16:13.904 "trsvcid": "$NVMF_PORT", 00:16:13.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:13.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:13.904 "hdgst": ${hdgst:-false}, 00:16:13.904 "ddgst": ${ddgst:-false} 00:16:13.904 }, 00:16:13.904 "method": "bdev_nvme_attach_controller" 00:16:13.904 } 00:16:13.904 EOF 00:16:13.904 )") 00:16:13.904 14:20:55 -- nvmf/common.sh@543 -- # cat 00:16:13.904 14:20:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:13.904 14:20:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:13.904 { 00:16:13.904 "params": { 00:16:13.904 "name": "Nvme$subsystem", 00:16:13.904 "trtype": "$TEST_TRANSPORT", 00:16:13.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:13.904 "adrfam": "ipv4", 00:16:13.904 "trsvcid": "$NVMF_PORT", 00:16:13.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:13.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:13.904 "hdgst": ${hdgst:-false}, 00:16:13.904 "ddgst": ${ddgst:-false} 00:16:13.904 }, 00:16:13.904 "method": "bdev_nvme_attach_controller" 00:16:13.904 } 00:16:13.904 EOF 00:16:13.904 )") 00:16:13.904 14:20:55 -- nvmf/common.sh@543 -- # cat 00:16:13.904 14:20:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:13.904 14:20:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:13.904 { 00:16:13.904 "params": { 00:16:13.904 "name": "Nvme$subsystem", 00:16:13.904 "trtype": "$TEST_TRANSPORT", 00:16:13.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:13.904 "adrfam": "ipv4", 00:16:13.904 "trsvcid": "$NVMF_PORT", 00:16:13.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:13.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:13.904 "hdgst": ${hdgst:-false}, 00:16:13.904 "ddgst": ${ddgst:-false} 00:16:13.904 }, 00:16:13.904 "method": "bdev_nvme_attach_controller" 00:16:13.904 } 00:16:13.904 EOF 00:16:13.904 )") 00:16:13.904 14:20:55 -- nvmf/common.sh@543 -- # cat 00:16:13.904 14:20:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:13.904 14:20:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:13.904 { 00:16:13.904 "params": { 00:16:13.904 "name": "Nvme$subsystem", 00:16:13.904 "trtype": "$TEST_TRANSPORT", 00:16:13.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:13.905 "adrfam": "ipv4", 00:16:13.905 "trsvcid": "$NVMF_PORT", 00:16:13.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:13.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:13.905 "hdgst": ${hdgst:-false}, 00:16:13.905 "ddgst": ${ddgst:-false} 00:16:13.905 }, 00:16:13.905 "method": "bdev_nvme_attach_controller" 00:16:13.905 } 00:16:13.905 EOF 00:16:13.905 )") 00:16:13.905 14:20:55 -- nvmf/common.sh@543 -- # cat 00:16:13.905 14:20:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:13.905 14:20:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:13.905 { 00:16:13.905 "params": { 00:16:13.905 "name": "Nvme$subsystem", 00:16:13.905 "trtype": "$TEST_TRANSPORT", 00:16:13.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:13.905 "adrfam": "ipv4", 00:16:13.905 "trsvcid": "$NVMF_PORT", 00:16:13.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:13.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:13.905 "hdgst": ${hdgst:-false}, 00:16:13.905 "ddgst": ${ddgst:-false} 00:16:13.905 }, 00:16:13.905 "method": "bdev_nvme_attach_controller" 00:16:13.905 } 00:16:13.905 EOF 00:16:13.905 )") 00:16:13.905 14:20:55 -- nvmf/common.sh@543 -- # cat 00:16:13.905 
14:20:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:13.905 14:20:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:13.905 { 00:16:13.905 "params": { 00:16:13.905 "name": "Nvme$subsystem", 00:16:13.905 "trtype": "$TEST_TRANSPORT", 00:16:13.905 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:13.905 "adrfam": "ipv4", 00:16:13.905 "trsvcid": "$NVMF_PORT", 00:16:13.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:13.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:13.905 "hdgst": ${hdgst:-false}, 00:16:13.905 "ddgst": ${ddgst:-false} 00:16:13.905 }, 00:16:13.905 "method": "bdev_nvme_attach_controller" 00:16:13.905 } 00:16:13.905 EOF 00:16:13.905 )") 00:16:13.905 14:20:55 -- nvmf/common.sh@543 -- # cat 00:16:13.905 14:20:55 -- nvmf/common.sh@545 -- # jq . 00:16:13.905 14:20:55 -- nvmf/common.sh@546 -- # IFS=, 00:16:13.905 14:20:55 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:13.905 "params": { 00:16:13.905 "name": "Nvme1", 00:16:13.905 "trtype": "tcp", 00:16:13.905 "traddr": "10.0.0.2", 00:16:13.905 "adrfam": "ipv4", 00:16:13.905 "trsvcid": "4420", 00:16:13.905 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:13.905 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:13.905 "hdgst": false, 00:16:13.905 "ddgst": false 00:16:13.905 }, 00:16:13.905 "method": "bdev_nvme_attach_controller" 00:16:13.905 },{ 00:16:13.905 "params": { 00:16:13.905 "name": "Nvme2", 00:16:13.905 "trtype": "tcp", 00:16:13.905 "traddr": "10.0.0.2", 00:16:13.905 "adrfam": "ipv4", 00:16:13.905 "trsvcid": "4420", 00:16:13.905 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:13.905 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:13.905 "hdgst": false, 00:16:13.905 "ddgst": false 00:16:13.905 }, 00:16:13.905 "method": "bdev_nvme_attach_controller" 00:16:13.905 },{ 00:16:13.905 "params": { 00:16:13.905 "name": "Nvme3", 00:16:13.905 "trtype": "tcp", 00:16:13.905 "traddr": "10.0.0.2", 00:16:13.905 "adrfam": "ipv4", 00:16:13.905 "trsvcid": "4420", 00:16:13.905 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:13.905 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:13.905 "hdgst": false, 00:16:13.905 "ddgst": false 00:16:13.905 }, 00:16:13.905 "method": "bdev_nvme_attach_controller" 00:16:13.905 },{ 00:16:13.905 "params": { 00:16:13.905 "name": "Nvme4", 00:16:13.905 "trtype": "tcp", 00:16:13.905 "traddr": "10.0.0.2", 00:16:13.905 "adrfam": "ipv4", 00:16:13.905 "trsvcid": "4420", 00:16:13.905 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:13.905 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:13.905 "hdgst": false, 00:16:13.905 "ddgst": false 00:16:13.905 }, 00:16:13.905 "method": "bdev_nvme_attach_controller" 00:16:13.905 },{ 00:16:13.905 "params": { 00:16:13.905 "name": "Nvme5", 00:16:13.905 "trtype": "tcp", 00:16:13.905 "traddr": "10.0.0.2", 00:16:13.905 "adrfam": "ipv4", 00:16:13.905 "trsvcid": "4420", 00:16:13.905 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:13.905 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:13.905 "hdgst": false, 00:16:13.905 "ddgst": false 00:16:13.905 }, 00:16:13.905 "method": "bdev_nvme_attach_controller" 00:16:13.905 },{ 00:16:13.905 "params": { 00:16:13.905 "name": "Nvme6", 00:16:13.905 "trtype": "tcp", 00:16:13.905 "traddr": "10.0.0.2", 00:16:13.905 "adrfam": "ipv4", 00:16:13.905 "trsvcid": "4420", 00:16:13.905 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:13.905 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:13.905 "hdgst": false, 00:16:13.905 "ddgst": false 00:16:13.905 }, 00:16:13.905 "method": "bdev_nvme_attach_controller" 00:16:13.905 },{ 00:16:13.905 "params": { 00:16:13.905 
"name": "Nvme7", 00:16:13.905 "trtype": "tcp", 00:16:13.905 "traddr": "10.0.0.2", 00:16:13.905 "adrfam": "ipv4", 00:16:13.905 "trsvcid": "4420", 00:16:13.905 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:13.905 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:13.905 "hdgst": false, 00:16:13.905 "ddgst": false 00:16:13.905 }, 00:16:13.905 "method": "bdev_nvme_attach_controller" 00:16:13.905 },{ 00:16:13.905 "params": { 00:16:13.905 "name": "Nvme8", 00:16:13.905 "trtype": "tcp", 00:16:13.905 "traddr": "10.0.0.2", 00:16:13.905 "adrfam": "ipv4", 00:16:13.905 "trsvcid": "4420", 00:16:13.905 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:13.905 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:13.905 "hdgst": false, 00:16:13.905 "ddgst": false 00:16:13.905 }, 00:16:13.905 "method": "bdev_nvme_attach_controller" 00:16:13.905 },{ 00:16:13.905 "params": { 00:16:13.905 "name": "Nvme9", 00:16:13.905 "trtype": "tcp", 00:16:13.905 "traddr": "10.0.0.2", 00:16:13.905 "adrfam": "ipv4", 00:16:13.905 "trsvcid": "4420", 00:16:13.905 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:13.905 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:13.905 "hdgst": false, 00:16:13.905 "ddgst": false 00:16:13.905 }, 00:16:13.905 "method": "bdev_nvme_attach_controller" 00:16:13.905 },{ 00:16:13.905 "params": { 00:16:13.905 "name": "Nvme10", 00:16:13.905 "trtype": "tcp", 00:16:13.905 "traddr": "10.0.0.2", 00:16:13.905 "adrfam": "ipv4", 00:16:13.905 "trsvcid": "4420", 00:16:13.905 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:13.905 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:13.905 "hdgst": false, 00:16:13.905 "ddgst": false 00:16:13.905 }, 00:16:13.905 "method": "bdev_nvme_attach_controller" 00:16:13.905 }' 00:16:13.905 [2024-04-26 14:20:55.327219] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
00:16:13.905 [2024-04-26 14:20:55.327310] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:13.905 EAL: No free 2048 kB hugepages reported on node 1 00:16:13.905 [2024-04-26 14:20:55.389554] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.164 [2024-04-26 14:20:55.504747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.063 14:20:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:16.063 14:20:57 -- common/autotest_common.sh@850 -- # return 0 00:16:16.063 14:20:57 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:16.063 14:20:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:16.063 14:20:57 -- common/autotest_common.sh@10 -- # set +x 00:16:16.063 14:20:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:16.063 14:20:57 -- target/shutdown.sh@83 -- # kill -9 3168593 00:16:16.063 14:20:57 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:16:16.063 14:20:57 -- target/shutdown.sh@87 -- # sleep 1 00:16:16.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3168593 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:16:16.995 14:20:58 -- target/shutdown.sh@88 -- # kill -0 3168451 00:16:16.995 14:20:58 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:16.995 14:20:58 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:16.995 14:20:58 -- nvmf/common.sh@521 -- # config=() 00:16:16.995 14:20:58 -- nvmf/common.sh@521 -- # local subsystem config 00:16:16.995 14:20:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:16.995 14:20:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:16.995 { 00:16:16.995 "params": { 00:16:16.995 "name": "Nvme$subsystem", 00:16:16.995 "trtype": "$TEST_TRANSPORT", 00:16:16.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:16.995 "adrfam": "ipv4", 00:16:16.995 "trsvcid": "$NVMF_PORT", 00:16:16.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:16.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:16.995 "hdgst": ${hdgst:-false}, 00:16:16.995 "ddgst": ${ddgst:-false} 00:16:16.995 }, 00:16:16.996 "method": "bdev_nvme_attach_controller" 00:16:16.996 } 00:16:16.996 EOF 00:16:16.996 )") 00:16:16.996 14:20:58 -- nvmf/common.sh@543 -- # cat 00:16:16.996 14:20:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:16.996 14:20:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:16.996 { 00:16:16.996 "params": { 00:16:16.996 "name": "Nvme$subsystem", 00:16:16.996 "trtype": "$TEST_TRANSPORT", 00:16:16.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:16.996 "adrfam": "ipv4", 00:16:16.996 "trsvcid": "$NVMF_PORT", 00:16:16.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:16.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:16.996 "hdgst": ${hdgst:-false}, 00:16:16.996 "ddgst": ${ddgst:-false} 00:16:16.996 }, 00:16:16.996 "method": "bdev_nvme_attach_controller" 00:16:16.996 } 00:16:16.996 EOF 00:16:16.996 )") 00:16:16.996 14:20:58 -- nvmf/common.sh@543 -- # cat 00:16:16.996 14:20:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:16.996 14:20:58 -- nvmf/common.sh@543 -- # 
config+=("$(cat <<-EOF 00:16:16.996 { 00:16:16.996 "params": { 00:16:16.996 "name": "Nvme$subsystem", 00:16:16.996 "trtype": "$TEST_TRANSPORT", 00:16:16.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:16.996 "adrfam": "ipv4", 00:16:16.996 "trsvcid": "$NVMF_PORT", 00:16:16.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:16.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:16.996 "hdgst": ${hdgst:-false}, 00:16:16.996 "ddgst": ${ddgst:-false} 00:16:16.996 }, 00:16:16.996 "method": "bdev_nvme_attach_controller" 00:16:16.996 } 00:16:16.996 EOF 00:16:16.996 )") 00:16:16.996 14:20:58 -- nvmf/common.sh@543 -- # cat 00:16:16.996 14:20:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:16.996 14:20:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:16.996 { 00:16:16.996 "params": { 00:16:16.996 "name": "Nvme$subsystem", 00:16:16.996 "trtype": "$TEST_TRANSPORT", 00:16:16.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:16.996 "adrfam": "ipv4", 00:16:16.996 "trsvcid": "$NVMF_PORT", 00:16:16.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:16.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:16.996 "hdgst": ${hdgst:-false}, 00:16:16.996 "ddgst": ${ddgst:-false} 00:16:16.996 }, 00:16:16.996 "method": "bdev_nvme_attach_controller" 00:16:16.996 } 00:16:16.996 EOF 00:16:16.996 )") 00:16:16.996 14:20:58 -- nvmf/common.sh@543 -- # cat 00:16:16.996 14:20:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:16.996 14:20:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:16.996 { 00:16:16.996 "params": { 00:16:16.996 "name": "Nvme$subsystem", 00:16:16.996 "trtype": "$TEST_TRANSPORT", 00:16:16.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:16.996 "adrfam": "ipv4", 00:16:16.996 "trsvcid": "$NVMF_PORT", 00:16:16.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:16.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:16.996 "hdgst": ${hdgst:-false}, 00:16:16.996 "ddgst": ${ddgst:-false} 00:16:16.996 }, 00:16:16.996 "method": "bdev_nvme_attach_controller" 00:16:16.996 } 00:16:16.996 EOF 00:16:16.996 )") 00:16:16.996 14:20:58 -- nvmf/common.sh@543 -- # cat 00:16:16.996 14:20:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:16.996 14:20:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:16.996 { 00:16:16.996 "params": { 00:16:16.996 "name": "Nvme$subsystem", 00:16:16.996 "trtype": "$TEST_TRANSPORT", 00:16:16.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:16.996 "adrfam": "ipv4", 00:16:16.996 "trsvcid": "$NVMF_PORT", 00:16:16.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:16.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:16.996 "hdgst": ${hdgst:-false}, 00:16:16.996 "ddgst": ${ddgst:-false} 00:16:16.996 }, 00:16:16.996 "method": "bdev_nvme_attach_controller" 00:16:16.996 } 00:16:16.996 EOF 00:16:16.996 )") 00:16:16.996 14:20:58 -- nvmf/common.sh@543 -- # cat 00:16:16.996 14:20:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:16.996 14:20:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:16.996 { 00:16:16.996 "params": { 00:16:16.996 "name": "Nvme$subsystem", 00:16:16.996 "trtype": "$TEST_TRANSPORT", 00:16:16.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:16.996 "adrfam": "ipv4", 00:16:16.996 "trsvcid": "$NVMF_PORT", 00:16:16.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:16.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:16.996 "hdgst": ${hdgst:-false}, 00:16:16.996 "ddgst": ${ddgst:-false} 00:16:16.996 }, 00:16:16.996 "method": 
"bdev_nvme_attach_controller" 00:16:16.996 } 00:16:16.996 EOF 00:16:16.996 )") 00:16:16.996 14:20:58 -- nvmf/common.sh@543 -- # cat 00:16:16.996 14:20:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:16.996 14:20:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:16.996 { 00:16:16.996 "params": { 00:16:16.996 "name": "Nvme$subsystem", 00:16:16.996 "trtype": "$TEST_TRANSPORT", 00:16:16.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:16.996 "adrfam": "ipv4", 00:16:16.996 "trsvcid": "$NVMF_PORT", 00:16:16.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:16.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:16.996 "hdgst": ${hdgst:-false}, 00:16:16.996 "ddgst": ${ddgst:-false} 00:16:16.996 }, 00:16:16.996 "method": "bdev_nvme_attach_controller" 00:16:16.996 } 00:16:16.996 EOF 00:16:16.996 )") 00:16:16.996 14:20:58 -- nvmf/common.sh@543 -- # cat 00:16:16.996 14:20:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:16.996 14:20:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:16.996 { 00:16:16.996 "params": { 00:16:16.996 "name": "Nvme$subsystem", 00:16:16.996 "trtype": "$TEST_TRANSPORT", 00:16:16.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:16.996 "adrfam": "ipv4", 00:16:16.996 "trsvcid": "$NVMF_PORT", 00:16:16.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:16.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:16.996 "hdgst": ${hdgst:-false}, 00:16:16.996 "ddgst": ${ddgst:-false} 00:16:16.996 }, 00:16:16.996 "method": "bdev_nvme_attach_controller" 00:16:16.996 } 00:16:16.996 EOF 00:16:16.996 )") 00:16:16.996 14:20:58 -- nvmf/common.sh@543 -- # cat 00:16:16.996 14:20:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:16.996 14:20:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:16.996 { 00:16:16.996 "params": { 00:16:16.996 "name": "Nvme$subsystem", 00:16:16.996 "trtype": "$TEST_TRANSPORT", 00:16:16.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:16.996 "adrfam": "ipv4", 00:16:16.996 "trsvcid": "$NVMF_PORT", 00:16:16.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:16.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:16.996 "hdgst": ${hdgst:-false}, 00:16:16.996 "ddgst": ${ddgst:-false} 00:16:16.996 }, 00:16:16.996 "method": "bdev_nvme_attach_controller" 00:16:16.996 } 00:16:16.996 EOF 00:16:16.996 )") 00:16:16.996 14:20:58 -- nvmf/common.sh@543 -- # cat 00:16:16.996 14:20:58 -- nvmf/common.sh@545 -- # jq . 
00:16:16.996 14:20:58 -- nvmf/common.sh@546 -- # IFS=, 00:16:16.996 14:20:58 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:16.996 "params": { 00:16:16.996 "name": "Nvme1", 00:16:16.996 "trtype": "tcp", 00:16:16.996 "traddr": "10.0.0.2", 00:16:16.996 "adrfam": "ipv4", 00:16:16.996 "trsvcid": "4420", 00:16:16.996 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:16.996 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:16.996 "hdgst": false, 00:16:16.996 "ddgst": false 00:16:16.996 }, 00:16:16.996 "method": "bdev_nvme_attach_controller" 00:16:16.996 },{ 00:16:16.996 "params": { 00:16:16.996 "name": "Nvme2", 00:16:16.996 "trtype": "tcp", 00:16:16.996 "traddr": "10.0.0.2", 00:16:16.996 "adrfam": "ipv4", 00:16:16.996 "trsvcid": "4420", 00:16:16.996 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:16.996 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:16.996 "hdgst": false, 00:16:16.996 "ddgst": false 00:16:16.996 }, 00:16:16.996 "method": "bdev_nvme_attach_controller" 00:16:16.996 },{ 00:16:16.996 "params": { 00:16:16.996 "name": "Nvme3", 00:16:16.996 "trtype": "tcp", 00:16:16.996 "traddr": "10.0.0.2", 00:16:16.996 "adrfam": "ipv4", 00:16:16.996 "trsvcid": "4420", 00:16:16.996 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:16.996 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:16.996 "hdgst": false, 00:16:16.996 "ddgst": false 00:16:16.996 }, 00:16:16.996 "method": "bdev_nvme_attach_controller" 00:16:16.996 },{ 00:16:16.996 "params": { 00:16:16.996 "name": "Nvme4", 00:16:16.996 "trtype": "tcp", 00:16:16.996 "traddr": "10.0.0.2", 00:16:16.996 "adrfam": "ipv4", 00:16:16.996 "trsvcid": "4420", 00:16:16.996 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:16.996 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:16.996 "hdgst": false, 00:16:16.996 "ddgst": false 00:16:16.996 }, 00:16:16.996 "method": "bdev_nvme_attach_controller" 00:16:16.996 },{ 00:16:16.996 "params": { 00:16:16.996 "name": "Nvme5", 00:16:16.996 "trtype": "tcp", 00:16:16.996 "traddr": "10.0.0.2", 00:16:16.996 "adrfam": "ipv4", 00:16:16.996 "trsvcid": "4420", 00:16:16.996 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:16.996 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:16.996 "hdgst": false, 00:16:16.996 "ddgst": false 00:16:16.997 }, 00:16:16.997 "method": "bdev_nvme_attach_controller" 00:16:16.997 },{ 00:16:16.997 "params": { 00:16:16.997 "name": "Nvme6", 00:16:16.997 "trtype": "tcp", 00:16:16.997 "traddr": "10.0.0.2", 00:16:16.997 "adrfam": "ipv4", 00:16:16.997 "trsvcid": "4420", 00:16:16.997 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:16.997 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:16.997 "hdgst": false, 00:16:16.997 "ddgst": false 00:16:16.997 }, 00:16:16.997 "method": "bdev_nvme_attach_controller" 00:16:16.997 },{ 00:16:16.997 "params": { 00:16:16.997 "name": "Nvme7", 00:16:16.997 "trtype": "tcp", 00:16:16.997 "traddr": "10.0.0.2", 00:16:16.997 "adrfam": "ipv4", 00:16:16.997 "trsvcid": "4420", 00:16:16.997 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:16.997 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:16.997 "hdgst": false, 00:16:16.997 "ddgst": false 00:16:16.997 }, 00:16:16.997 "method": "bdev_nvme_attach_controller" 00:16:16.997 },{ 00:16:16.997 "params": { 00:16:16.997 "name": "Nvme8", 00:16:16.997 "trtype": "tcp", 00:16:16.997 "traddr": "10.0.0.2", 00:16:16.997 "adrfam": "ipv4", 00:16:16.997 "trsvcid": "4420", 00:16:16.997 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:16.997 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:16.997 "hdgst": false, 00:16:16.997 "ddgst": false 00:16:16.997 }, 00:16:16.997 "method": 
"bdev_nvme_attach_controller" 00:16:16.997 },{ 00:16:16.997 "params": { 00:16:16.997 "name": "Nvme9", 00:16:16.997 "trtype": "tcp", 00:16:16.997 "traddr": "10.0.0.2", 00:16:16.997 "adrfam": "ipv4", 00:16:16.997 "trsvcid": "4420", 00:16:16.997 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:16.997 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:16.997 "hdgst": false, 00:16:16.997 "ddgst": false 00:16:16.997 }, 00:16:16.997 "method": "bdev_nvme_attach_controller" 00:16:16.997 },{ 00:16:16.997 "params": { 00:16:16.997 "name": "Nvme10", 00:16:16.997 "trtype": "tcp", 00:16:16.997 "traddr": "10.0.0.2", 00:16:16.997 "adrfam": "ipv4", 00:16:16.997 "trsvcid": "4420", 00:16:16.997 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:16.997 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:16.997 "hdgst": false, 00:16:16.997 "ddgst": false 00:16:16.997 }, 00:16:16.997 "method": "bdev_nvme_attach_controller" 00:16:16.997 }' 00:16:16.997 [2024-04-26 14:20:58.400114] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:16:16.997 [2024-04-26 14:20:58.400208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3168840 ] 00:16:16.997 EAL: No free 2048 kB hugepages reported on node 1 00:16:16.997 [2024-04-26 14:20:58.463956] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.254 [2024-04-26 14:20:58.578589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.627 Running I/O for 1 seconds... 00:16:20.001 00:16:20.001 Latency(us) 00:16:20.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.001 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:20.001 Verification LBA range: start 0x0 length 0x400 00:16:20.001 Nvme1n1 : 1.19 161.82 10.11 0.00 0.00 391069.39 27379.48 324670.20 00:16:20.001 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:20.001 Verification LBA range: start 0x0 length 0x400 00:16:20.001 Nvme2n1 : 1.18 162.94 10.18 0.00 0.00 380406.90 22330.79 310689.19 00:16:20.001 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:20.001 Verification LBA range: start 0x0 length 0x400 00:16:20.001 Nvme3n1 : 1.11 177.70 11.11 0.00 0.00 336820.54 14369.37 321563.31 00:16:20.001 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:20.001 Verification LBA range: start 0x0 length 0x400 00:16:20.001 Nvme4n1 : 1.20 212.79 13.30 0.00 0.00 280185.74 23787.14 309135.74 00:16:20.001 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:20.001 Verification LBA range: start 0x0 length 0x400 00:16:20.001 Nvme5n1 : 1.19 160.75 10.05 0.00 0.00 363197.38 27767.85 337097.77 00:16:20.001 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:20.001 Verification LBA range: start 0x0 length 0x400 00:16:20.001 Nvme6n1 : 1.20 160.21 10.01 0.00 0.00 357128.72 25243.50 369720.13 00:16:20.001 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:20.001 Verification LBA range: start 0x0 length 0x400 00:16:20.001 Nvme7n1 : 1.21 215.83 13.49 0.00 0.00 259188.00 3907.89 313796.08 00:16:20.001 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:20.001 Verification LBA range: start 0x0 length 0x400 00:16:20.001 Nvme8n1 : 1.21 210.99 13.19 0.00 0.00 260100.55 28932.93 
316902.97 00:16:20.001 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:20.001 Verification LBA range: start 0x0 length 0x400 00:16:20.001 Nvme9n1 : 1.23 212.10 13.26 0.00 0.00 253044.39 20680.25 274959.93 00:16:20.001 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:20.001 Verification LBA range: start 0x0 length 0x400 00:16:20.001 Nvme10n1 : 1.22 209.59 13.10 0.00 0.00 250978.23 20194.80 321563.31 00:16:20.002 =================================================================================================================== 00:16:20.002 Total : 1884.71 117.79 0.00 0.00 305582.53 3907.89 369720.13 00:16:20.259 14:21:01 -- target/shutdown.sh@94 -- # stoptarget 00:16:20.259 14:21:01 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:16:20.259 14:21:01 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:20.259 14:21:01 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:20.259 14:21:01 -- target/shutdown.sh@45 -- # nvmftestfini 00:16:20.259 14:21:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:20.259 14:21:01 -- nvmf/common.sh@117 -- # sync 00:16:20.259 14:21:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:20.259 14:21:01 -- nvmf/common.sh@120 -- # set +e 00:16:20.259 14:21:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:20.259 14:21:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:20.259 rmmod nvme_tcp 00:16:20.259 rmmod nvme_fabrics 00:16:20.259 rmmod nvme_keyring 00:16:20.259 14:21:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:20.259 14:21:01 -- nvmf/common.sh@124 -- # set -e 00:16:20.259 14:21:01 -- nvmf/common.sh@125 -- # return 0 00:16:20.259 14:21:01 -- nvmf/common.sh@478 -- # '[' -n 3168451 ']' 00:16:20.259 14:21:01 -- nvmf/common.sh@479 -- # killprocess 3168451 00:16:20.259 14:21:01 -- common/autotest_common.sh@936 -- # '[' -z 3168451 ']' 00:16:20.259 14:21:01 -- common/autotest_common.sh@940 -- # kill -0 3168451 00:16:20.259 14:21:01 -- common/autotest_common.sh@941 -- # uname 00:16:20.259 14:21:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:20.259 14:21:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3168451 00:16:20.259 14:21:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:20.259 14:21:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:20.259 14:21:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3168451' 00:16:20.259 killing process with pid 3168451 00:16:20.259 14:21:01 -- common/autotest_common.sh@955 -- # kill 3168451 00:16:20.259 14:21:01 -- common/autotest_common.sh@960 -- # wait 3168451 00:16:20.517 14:21:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:20.517 14:21:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:20.517 14:21:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:20.517 14:21:02 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:20.517 14:21:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:20.517 14:21:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.517 14:21:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:20.517 14:21:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.058 14:21:04 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:23.058 00:16:23.058 real 0m11.453s 
00:16:23.058 user 0m34.457s 00:16:23.058 sys 0m2.834s 00:16:23.058 14:21:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:23.058 14:21:04 -- common/autotest_common.sh@10 -- # set +x 00:16:23.058 ************************************ 00:16:23.058 END TEST nvmf_shutdown_tc1 00:16:23.058 ************************************ 00:16:23.058 14:21:04 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:16:23.058 14:21:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:23.058 14:21:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:23.058 14:21:04 -- common/autotest_common.sh@10 -- # set +x 00:16:23.058 ************************************ 00:16:23.058 START TEST nvmf_shutdown_tc2 00:16:23.058 ************************************ 00:16:23.058 14:21:04 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:16:23.058 14:21:04 -- target/shutdown.sh@99 -- # starttarget 00:16:23.058 14:21:04 -- target/shutdown.sh@15 -- # nvmftestinit 00:16:23.058 14:21:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:23.058 14:21:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.058 14:21:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:23.058 14:21:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:23.058 14:21:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:23.058 14:21:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.058 14:21:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.058 14:21:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.058 14:21:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:23.058 14:21:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:23.058 14:21:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:23.058 14:21:04 -- common/autotest_common.sh@10 -- # set +x 00:16:23.058 14:21:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:23.058 14:21:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:23.058 14:21:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:23.058 14:21:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:23.058 14:21:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:23.058 14:21:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:23.058 14:21:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:23.058 14:21:04 -- nvmf/common.sh@295 -- # net_devs=() 00:16:23.058 14:21:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:23.058 14:21:04 -- nvmf/common.sh@296 -- # e810=() 00:16:23.058 14:21:04 -- nvmf/common.sh@296 -- # local -ga e810 00:16:23.058 14:21:04 -- nvmf/common.sh@297 -- # x722=() 00:16:23.058 14:21:04 -- nvmf/common.sh@297 -- # local -ga x722 00:16:23.058 14:21:04 -- nvmf/common.sh@298 -- # mlx=() 00:16:23.058 14:21:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:23.058 14:21:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:23.058 14:21:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:23.058 14:21:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:23.058 14:21:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:23.058 14:21:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:23.058 14:21:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:23.058 14:21:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:23.058 14:21:04 -- nvmf/common.sh@314 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:23.058 14:21:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:23.058 14:21:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:23.058 14:21:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:23.058 14:21:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:23.058 14:21:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:23.058 14:21:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:23.058 14:21:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:23.058 14:21:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:23.058 14:21:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:23.058 14:21:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:23.058 14:21:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:16:23.058 Found 0000:08:00.0 (0x8086 - 0x159b) 00:16:23.058 14:21:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:23.058 14:21:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:23.058 14:21:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.058 14:21:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.058 14:21:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:23.058 14:21:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:23.058 14:21:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:16:23.058 Found 0000:08:00.1 (0x8086 - 0x159b) 00:16:23.058 14:21:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:23.058 14:21:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:23.058 14:21:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.058 14:21:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.058 14:21:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:23.058 14:21:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:23.058 14:21:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:23.058 14:21:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:23.058 14:21:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:23.058 14:21:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.058 14:21:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:23.058 14:21:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.058 14:21:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:16:23.058 Found net devices under 0000:08:00.0: cvl_0_0 00:16:23.058 14:21:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.058 14:21:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:23.058 14:21:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.058 14:21:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:23.058 14:21:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.058 14:21:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:16:23.058 Found net devices under 0000:08:00.1: cvl_0_1 00:16:23.058 14:21:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.058 14:21:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:23.058 14:21:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:23.058 14:21:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:23.058 14:21:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:23.058 14:21:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 
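gather_supported_nvmf_pci_devs, traced above, matches a fixed list of NIC vendor:device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox parts) against the pci_bus_cache, then resolves every surviving PCI function to its kernel netdev through sysfs. The resolution step, reduced to the two E810 functions this run found:

# How 0000:08:00.0/.1 (0x8086:0x159b) become cvl_0_0/cvl_0_1 in the log above
for pci in 0000:08:00.0 0000:08:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done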
00:16:23.058 14:21:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.058 14:21:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:23.058 14:21:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:23.058 14:21:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:23.058 14:21:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:23.058 14:21:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:23.058 14:21:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:23.058 14:21:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:23.058 14:21:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.058 14:21:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:23.058 14:21:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:23.058 14:21:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:23.058 14:21:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:23.058 14:21:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:23.058 14:21:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:23.058 14:21:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:23.058 14:21:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:23.058 14:21:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:23.058 14:21:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:23.058 14:21:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:23.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:16:23.058 00:16:23.058 --- 10.0.0.2 ping statistics --- 00:16:23.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.058 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:16:23.058 14:21:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:23.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:23.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:16:23.058 00:16:23.058 --- 10.0.0.1 ping statistics --- 00:16:23.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.058 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:16:23.058 14:21:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.058 14:21:04 -- nvmf/common.sh@411 -- # return 0 00:16:23.058 14:21:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:23.058 14:21:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.058 14:21:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:23.058 14:21:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:23.058 14:21:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.058 14:21:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:23.058 14:21:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:23.058 14:21:04 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:16:23.058 14:21:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:23.058 14:21:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:23.058 14:21:04 -- common/autotest_common.sh@10 -- # set +x 00:16:23.058 14:21:04 -- nvmf/common.sh@470 -- # nvmfpid=3169653 00:16:23.059 14:21:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:23.059 14:21:04 -- nvmf/common.sh@471 -- # waitforlisten 3169653 00:16:23.059 14:21:04 -- common/autotest_common.sh@817 -- # '[' -z 3169653 ']' 00:16:23.059 14:21:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.059 14:21:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:23.059 14:21:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.059 14:21:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:23.059 14:21:04 -- common/autotest_common.sh@10 -- # set +x 00:16:23.059 [2024-04-26 14:21:04.433960] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:16:23.059 [2024-04-26 14:21:04.434045] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.059 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.059 [2024-04-26 14:21:04.499174] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:23.059 [2024-04-26 14:21:04.614598] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.059 [2024-04-26 14:21:04.614662] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.059 [2024-04-26 14:21:04.614680] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:23.059 [2024-04-26 14:21:04.614694] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:23.059 [2024-04-26 14:21:04.614707] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
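Because the target is launched with -e 0xFFFF, every tracepoint group is enabled, and app_setup_trace prints the two capture paths quoted above. As standalone commands, with the instance ID and shared-memory name from this run:

spdk_trace -s nvmf -i 0              # snapshot events from the live nvmf target
cp /dev/shm/nvmf_trace.0 /tmp/       # or keep the shm file for offline analysis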
00:16:23.059 [2024-04-26 14:21:04.614792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:16:23.059 [2024-04-26 14:21:04.614854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:16:23.059 [2024-04-26 14:21:04.614911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:16:23.059 [2024-04-26 14:21:04.614915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:23.317 14:21:04 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:16:23.317 14:21:04 -- common/autotest_common.sh@850 -- # return 0
00:16:23.317 14:21:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:16:23.317 14:21:04 -- common/autotest_common.sh@716 -- # xtrace_disable
00:16:23.317 14:21:04 -- common/autotest_common.sh@10 -- # set +x
00:16:23.317 14:21:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:23.317 14:21:04 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:23.317 14:21:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:23.317 14:21:04 -- common/autotest_common.sh@10 -- # set +x
00:16:23.317 [2024-04-26 14:21:04.770314] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:23.317 14:21:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:23.317 14:21:04 -- target/shutdown.sh@22 -- # num_subsystems=({1..10})
00:16:23.317 14:21:04 -- target/shutdown.sh@24 -- # timing_enter create_subsystems
00:16:23.317 14:21:04 -- common/autotest_common.sh@710 -- # xtrace_disable
00:16:23.317 14:21:04 -- common/autotest_common.sh@10 -- # set +x
00:16:23.317 14:21:04 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:16:23.317 14:21:04 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:23.317 14:21:04 -- target/shutdown.sh@28 -- # cat
00:16:23.317 14:21:04 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:23.317 14:21:04 -- target/shutdown.sh@28 -- # cat
00:16:23.317 14:21:04 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:23.317 14:21:04 -- target/shutdown.sh@28 -- # cat
00:16:23.317 14:21:04 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:23.317 14:21:04 -- target/shutdown.sh@28 -- # cat
00:16:23.317 14:21:04 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:23.317 14:21:04 -- target/shutdown.sh@28 -- # cat
00:16:23.317 14:21:04 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:23.317 14:21:04 -- target/shutdown.sh@28 -- # cat
00:16:23.317 14:21:04 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:23.317 14:21:04 -- target/shutdown.sh@28 -- # cat
00:16:23.317 14:21:04 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:23.317 14:21:04 -- target/shutdown.sh@28 -- # cat
00:16:23.317 14:21:04 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:23.317 14:21:04 -- target/shutdown.sh@28 -- # cat
00:16:23.317 14:21:04 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:23.317 14:21:04 -- target/shutdown.sh@28 -- # cat
00:16:23.317 14:21:04 -- target/shutdown.sh@35 -- # rpc_cmd
00:16:23.317 14:21:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:23.317 14:21:04 -- common/autotest_common.sh@10 -- # set +x
00:16:23.317 Malloc1
00:16:23.317 [2024-04-26 14:21:04.860688] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:23.317 Malloc2
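[Editor's note] The create_subsystems loop above only cats ten RPC fragments into rpcs.txt and replays the whole file through a single rpc_cmd, which is why the ten Malloc bdev names now start appearing in a row. Issued one call at a time, an equivalent setup looks roughly like the sketch below (the transport flags are exactly as traced; the Malloc size, block size, and serial numbers are illustrative, not taken from this log):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in {1..10}; do
        ./scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512     # 64 MiB, 512 B blocks (assumed)
        ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done

Batching through rpcs.txt keeps the trace short and makes the single "Listening on 10.0.0.2 port 4420" notice cover all ten subsystems at once.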
00:16:23.575 Malloc3 00:16:23.575 Malloc4 00:16:23.575 Malloc5 00:16:23.575 Malloc6 00:16:23.575 Malloc7 00:16:23.832 Malloc8 00:16:23.832 Malloc9 00:16:23.832 Malloc10 00:16:23.832 14:21:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:23.832 14:21:05 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:23.832 14:21:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:23.832 14:21:05 -- common/autotest_common.sh@10 -- # set +x 00:16:23.832 14:21:05 -- target/shutdown.sh@103 -- # perfpid=3169801 00:16:23.832 14:21:05 -- target/shutdown.sh@104 -- # waitforlisten 3169801 /var/tmp/bdevperf.sock 00:16:23.832 14:21:05 -- common/autotest_common.sh@817 -- # '[' -z 3169801 ']' 00:16:23.832 14:21:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:23.832 14:21:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:23.832 14:21:05 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:23.832 14:21:05 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:23.832 14:21:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:23.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:23.832 14:21:05 -- nvmf/common.sh@521 -- # config=() 00:16:23.832 14:21:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:23.832 14:21:05 -- nvmf/common.sh@521 -- # local subsystem config 00:16:23.832 14:21:05 -- common/autotest_common.sh@10 -- # set +x 00:16:23.832 14:21:05 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:23.832 14:21:05 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:23.832 { 00:16:23.832 "params": { 00:16:23.832 "name": "Nvme$subsystem", 00:16:23.832 "trtype": "$TEST_TRANSPORT", 00:16:23.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:23.832 "adrfam": "ipv4", 00:16:23.832 "trsvcid": "$NVMF_PORT", 00:16:23.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.832 "hdgst": ${hdgst:-false}, 00:16:23.832 "ddgst": ${ddgst:-false} 00:16:23.832 }, 00:16:23.832 "method": "bdev_nvme_attach_controller" 00:16:23.832 } 00:16:23.832 EOF 00:16:23.832 )") 00:16:23.832 14:21:05 -- nvmf/common.sh@543 -- # cat 00:16:23.832 14:21:05 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:23.832 14:21:05 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:23.832 { 00:16:23.832 "params": { 00:16:23.832 "name": "Nvme$subsystem", 00:16:23.832 "trtype": "$TEST_TRANSPORT", 00:16:23.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:23.832 "adrfam": "ipv4", 00:16:23.832 "trsvcid": "$NVMF_PORT", 00:16:23.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.832 "hdgst": ${hdgst:-false}, 00:16:23.832 "ddgst": ${ddgst:-false} 00:16:23.832 }, 00:16:23.832 "method": "bdev_nvme_attach_controller" 00:16:23.832 } 00:16:23.832 EOF 00:16:23.832 )") 00:16:23.832 14:21:05 -- nvmf/common.sh@543 -- # cat 00:16:23.832 14:21:05 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:23.832 14:21:05 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:23.832 { 00:16:23.832 "params": { 00:16:23.832 "name": "Nvme$subsystem", 00:16:23.832 "trtype": "$TEST_TRANSPORT", 00:16:23.832 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:16:23.832 "adrfam": "ipv4", 00:16:23.832 "trsvcid": "$NVMF_PORT", 00:16:23.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.832 "hdgst": ${hdgst:-false}, 00:16:23.832 "ddgst": ${ddgst:-false} 00:16:23.832 }, 00:16:23.832 "method": "bdev_nvme_attach_controller" 00:16:23.832 } 00:16:23.832 EOF 00:16:23.832 )") 00:16:23.832 14:21:05 -- nvmf/common.sh@543 -- # cat 00:16:23.832 14:21:05 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:23.832 14:21:05 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:23.832 { 00:16:23.832 "params": { 00:16:23.832 "name": "Nvme$subsystem", 00:16:23.832 "trtype": "$TEST_TRANSPORT", 00:16:23.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:23.832 "adrfam": "ipv4", 00:16:23.832 "trsvcid": "$NVMF_PORT", 00:16:23.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.832 "hdgst": ${hdgst:-false}, 00:16:23.832 "ddgst": ${ddgst:-false} 00:16:23.832 }, 00:16:23.832 "method": "bdev_nvme_attach_controller" 00:16:23.832 } 00:16:23.832 EOF 00:16:23.832 )") 00:16:23.832 14:21:05 -- nvmf/common.sh@543 -- # cat 00:16:23.832 14:21:05 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:23.832 14:21:05 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:23.832 { 00:16:23.832 "params": { 00:16:23.832 "name": "Nvme$subsystem", 00:16:23.832 "trtype": "$TEST_TRANSPORT", 00:16:23.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:23.832 "adrfam": "ipv4", 00:16:23.832 "trsvcid": "$NVMF_PORT", 00:16:23.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.832 "hdgst": ${hdgst:-false}, 00:16:23.832 "ddgst": ${ddgst:-false} 00:16:23.832 }, 00:16:23.832 "method": "bdev_nvme_attach_controller" 00:16:23.832 } 00:16:23.832 EOF 00:16:23.832 )") 00:16:23.832 14:21:05 -- nvmf/common.sh@543 -- # cat 00:16:23.832 14:21:05 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:23.832 14:21:05 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:23.832 { 00:16:23.832 "params": { 00:16:23.832 "name": "Nvme$subsystem", 00:16:23.832 "trtype": "$TEST_TRANSPORT", 00:16:23.832 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:23.832 "adrfam": "ipv4", 00:16:23.832 "trsvcid": "$NVMF_PORT", 00:16:23.832 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.832 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.832 "hdgst": ${hdgst:-false}, 00:16:23.832 "ddgst": ${ddgst:-false} 00:16:23.832 }, 00:16:23.832 "method": "bdev_nvme_attach_controller" 00:16:23.832 } 00:16:23.832 EOF 00:16:23.832 )") 00:16:23.832 14:21:05 -- nvmf/common.sh@543 -- # cat 00:16:23.832 14:21:05 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:23.832 14:21:05 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:23.832 { 00:16:23.832 "params": { 00:16:23.832 "name": "Nvme$subsystem", 00:16:23.832 "trtype": "$TEST_TRANSPORT", 00:16:23.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:23.833 "adrfam": "ipv4", 00:16:23.833 "trsvcid": "$NVMF_PORT", 00:16:23.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.833 "hdgst": ${hdgst:-false}, 00:16:23.833 "ddgst": ${ddgst:-false} 00:16:23.833 }, 00:16:23.833 "method": "bdev_nvme_attach_controller" 00:16:23.833 } 00:16:23.833 EOF 00:16:23.833 )") 00:16:23.833 14:21:05 -- nvmf/common.sh@543 -- # cat 00:16:23.833 14:21:05 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 
00:16:23.833 14:21:05 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:23.833 { 00:16:23.833 "params": { 00:16:23.833 "name": "Nvme$subsystem", 00:16:23.833 "trtype": "$TEST_TRANSPORT", 00:16:23.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:23.833 "adrfam": "ipv4", 00:16:23.833 "trsvcid": "$NVMF_PORT", 00:16:23.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.833 "hdgst": ${hdgst:-false}, 00:16:23.833 "ddgst": ${ddgst:-false} 00:16:23.833 }, 00:16:23.833 "method": "bdev_nvme_attach_controller" 00:16:23.833 } 00:16:23.833 EOF 00:16:23.833 )") 00:16:23.833 14:21:05 -- nvmf/common.sh@543 -- # cat 00:16:23.833 14:21:05 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:23.833 14:21:05 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:23.833 { 00:16:23.833 "params": { 00:16:23.833 "name": "Nvme$subsystem", 00:16:23.833 "trtype": "$TEST_TRANSPORT", 00:16:23.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:23.833 "adrfam": "ipv4", 00:16:23.833 "trsvcid": "$NVMF_PORT", 00:16:23.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.833 "hdgst": ${hdgst:-false}, 00:16:23.833 "ddgst": ${ddgst:-false} 00:16:23.833 }, 00:16:23.833 "method": "bdev_nvme_attach_controller" 00:16:23.833 } 00:16:23.833 EOF 00:16:23.833 )") 00:16:23.833 14:21:05 -- nvmf/common.sh@543 -- # cat 00:16:23.833 14:21:05 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:23.833 14:21:05 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:23.833 { 00:16:23.833 "params": { 00:16:23.833 "name": "Nvme$subsystem", 00:16:23.833 "trtype": "$TEST_TRANSPORT", 00:16:23.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:23.833 "adrfam": "ipv4", 00:16:23.833 "trsvcid": "$NVMF_PORT", 00:16:23.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.833 "hdgst": ${hdgst:-false}, 00:16:23.833 "ddgst": ${ddgst:-false} 00:16:23.833 }, 00:16:23.833 "method": "bdev_nvme_attach_controller" 00:16:23.833 } 00:16:23.833 EOF 00:16:23.833 )") 00:16:23.833 14:21:05 -- nvmf/common.sh@543 -- # cat 00:16:23.833 14:21:05 -- nvmf/common.sh@545 -- # jq . 
00:16:23.833 14:21:05 -- nvmf/common.sh@546 -- # IFS=, 00:16:23.833 14:21:05 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:23.833 "params": { 00:16:23.833 "name": "Nvme1", 00:16:23.833 "trtype": "tcp", 00:16:23.833 "traddr": "10.0.0.2", 00:16:23.833 "adrfam": "ipv4", 00:16:23.833 "trsvcid": "4420", 00:16:23.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:23.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:23.833 "hdgst": false, 00:16:23.833 "ddgst": false 00:16:23.833 }, 00:16:23.833 "method": "bdev_nvme_attach_controller" 00:16:23.833 },{ 00:16:23.833 "params": { 00:16:23.833 "name": "Nvme2", 00:16:23.833 "trtype": "tcp", 00:16:23.833 "traddr": "10.0.0.2", 00:16:23.833 "adrfam": "ipv4", 00:16:23.833 "trsvcid": "4420", 00:16:23.833 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:23.833 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:23.833 "hdgst": false, 00:16:23.833 "ddgst": false 00:16:23.833 }, 00:16:23.833 "method": "bdev_nvme_attach_controller" 00:16:23.833 },{ 00:16:23.833 "params": { 00:16:23.833 "name": "Nvme3", 00:16:23.833 "trtype": "tcp", 00:16:23.833 "traddr": "10.0.0.2", 00:16:23.833 "adrfam": "ipv4", 00:16:23.833 "trsvcid": "4420", 00:16:23.833 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:23.833 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:23.833 "hdgst": false, 00:16:23.833 "ddgst": false 00:16:23.833 }, 00:16:23.833 "method": "bdev_nvme_attach_controller" 00:16:23.833 },{ 00:16:23.833 "params": { 00:16:23.833 "name": "Nvme4", 00:16:23.833 "trtype": "tcp", 00:16:23.833 "traddr": "10.0.0.2", 00:16:23.833 "adrfam": "ipv4", 00:16:23.833 "trsvcid": "4420", 00:16:23.833 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:23.833 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:23.833 "hdgst": false, 00:16:23.833 "ddgst": false 00:16:23.833 }, 00:16:23.833 "method": "bdev_nvme_attach_controller" 00:16:23.833 },{ 00:16:23.833 "params": { 00:16:23.833 "name": "Nvme5", 00:16:23.833 "trtype": "tcp", 00:16:23.833 "traddr": "10.0.0.2", 00:16:23.833 "adrfam": "ipv4", 00:16:23.833 "trsvcid": "4420", 00:16:23.833 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:23.833 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:23.833 "hdgst": false, 00:16:23.833 "ddgst": false 00:16:23.833 }, 00:16:23.833 "method": "bdev_nvme_attach_controller" 00:16:23.833 },{ 00:16:23.833 "params": { 00:16:23.833 "name": "Nvme6", 00:16:23.833 "trtype": "tcp", 00:16:23.833 "traddr": "10.0.0.2", 00:16:23.833 "adrfam": "ipv4", 00:16:23.833 "trsvcid": "4420", 00:16:23.833 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:23.833 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:23.833 "hdgst": false, 00:16:23.833 "ddgst": false 00:16:23.833 }, 00:16:23.833 "method": "bdev_nvme_attach_controller" 00:16:23.833 },{ 00:16:23.833 "params": { 00:16:23.833 "name": "Nvme7", 00:16:23.833 "trtype": "tcp", 00:16:23.833 "traddr": "10.0.0.2", 00:16:23.833 "adrfam": "ipv4", 00:16:23.833 "trsvcid": "4420", 00:16:23.833 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:23.833 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:23.833 "hdgst": false, 00:16:23.833 "ddgst": false 00:16:23.833 }, 00:16:23.833 "method": "bdev_nvme_attach_controller" 00:16:23.833 },{ 00:16:23.833 "params": { 00:16:23.833 "name": "Nvme8", 00:16:23.833 "trtype": "tcp", 00:16:23.833 "traddr": "10.0.0.2", 00:16:23.833 "adrfam": "ipv4", 00:16:23.833 "trsvcid": "4420", 00:16:23.833 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:23.833 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:23.833 "hdgst": false, 00:16:23.833 "ddgst": false 00:16:23.833 }, 00:16:23.833 "method": 
"bdev_nvme_attach_controller" 00:16:23.833 },{ 00:16:23.833 "params": { 00:16:23.833 "name": "Nvme9", 00:16:23.833 "trtype": "tcp", 00:16:23.833 "traddr": "10.0.0.2", 00:16:23.833 "adrfam": "ipv4", 00:16:23.833 "trsvcid": "4420", 00:16:23.833 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:23.833 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:23.833 "hdgst": false, 00:16:23.833 "ddgst": false 00:16:23.833 }, 00:16:23.833 "method": "bdev_nvme_attach_controller" 00:16:23.833 },{ 00:16:23.833 "params": { 00:16:23.833 "name": "Nvme10", 00:16:23.833 "trtype": "tcp", 00:16:23.833 "traddr": "10.0.0.2", 00:16:23.833 "adrfam": "ipv4", 00:16:23.833 "trsvcid": "4420", 00:16:23.833 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:23.833 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:23.833 "hdgst": false, 00:16:23.833 "ddgst": false 00:16:23.833 }, 00:16:23.833 "method": "bdev_nvme_attach_controller" 00:16:23.833 }' 00:16:23.833 [2024-04-26 14:21:05.361858] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:16:23.833 [2024-04-26 14:21:05.361936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3169801 ] 00:16:23.833 EAL: No free 2048 kB hugepages reported on node 1 00:16:24.091 [2024-04-26 14:21:05.424126] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.091 [2024-04-26 14:21:05.539873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.989 Running I/O for 10 seconds... 00:16:25.989 14:21:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:25.989 14:21:07 -- common/autotest_common.sh@850 -- # return 0 00:16:25.989 14:21:07 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:25.989 14:21:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:25.989 14:21:07 -- common/autotest_common.sh@10 -- # set +x 00:16:25.989 14:21:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:25.989 14:21:07 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:16:25.989 14:21:07 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:25.989 14:21:07 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:16:25.989 14:21:07 -- target/shutdown.sh@57 -- # local ret=1 00:16:25.989 14:21:07 -- target/shutdown.sh@58 -- # local i 00:16:25.989 14:21:07 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:16:25.989 14:21:07 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:25.989 14:21:07 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:25.989 14:21:07 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:25.989 14:21:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:25.989 14:21:07 -- common/autotest_common.sh@10 -- # set +x 00:16:25.989 14:21:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:25.990 14:21:07 -- target/shutdown.sh@60 -- # read_io_count=12 00:16:25.990 14:21:07 -- target/shutdown.sh@63 -- # '[' 12 -ge 100 ']' 00:16:25.990 14:21:07 -- target/shutdown.sh@67 -- # sleep 0.25 00:16:26.247 14:21:07 -- target/shutdown.sh@59 -- # (( i-- )) 00:16:26.247 14:21:07 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:26.247 14:21:07 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:26.247 14:21:07 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:26.247 14:21:07 -- 
00:16:26.247 14:21:07 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:26.247 14:21:07 -- common/autotest_common.sh@10 -- # set +x
00:16:26.247 14:21:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:26.247 14:21:07 -- target/shutdown.sh@60 -- # read_io_count=75
00:16:26.247 14:21:07 -- target/shutdown.sh@63 -- # '[' 75 -ge 100 ']'
00:16:26.247 14:21:07 -- target/shutdown.sh@67 -- # sleep 0.25
00:16:26.505 14:21:08 -- target/shutdown.sh@59 -- # (( i-- ))
00:16:26.505 14:21:08 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:16:26.505 14:21:08 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:16:26.505 14:21:08 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:16:26.505 14:21:08 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:26.505 14:21:08 -- common/autotest_common.sh@10 -- # set +x
00:16:26.764 14:21:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:26.764 14:21:08 -- target/shutdown.sh@60 -- # read_io_count=131
00:16:26.764 14:21:08 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:16:26.764 14:21:08 -- target/shutdown.sh@64 -- # ret=0
00:16:26.764 14:21:08 -- target/shutdown.sh@65 -- # break
00:16:26.764 14:21:08 -- target/shutdown.sh@69 -- # return 0
00:16:26.764 14:21:08 -- target/shutdown.sh@110 -- # killprocess 3169801
00:16:26.764 14:21:08 -- common/autotest_common.sh@936 -- # '[' -z 3169801 ']'
00:16:26.764 14:21:08 -- common/autotest_common.sh@940 -- # kill -0 3169801
00:16:26.764 14:21:08 -- common/autotest_common.sh@941 -- # uname
00:16:26.764 14:21:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:26.764 14:21:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3169801
00:16:26.764 14:21:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:26.764 14:21:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:26.764 14:21:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3169801'
00:16:26.764 killing process with pid 3169801
00:16:26.764 14:21:08 -- common/autotest_common.sh@955 -- # kill 3169801
00:16:26.764 14:21:08 -- common/autotest_common.sh@960 -- # wait 3169801
00:16:26.764 Received shutdown signal, test time was about 1.133151 seconds
00:16:26.764
00:16:26.764 Latency(us)
00:16:26.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:26.764 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:26.764 Verification LBA range: start 0x0 length 0x400
00:16:26.764 Nvme1n1 : 1.08 178.60 11.16 0.00 0.00 352963.44 24175.50 299815.06
00:16:26.764 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:26.764 Verification LBA range: start 0x0 length 0x400
00:16:26.764 Nvme2n1 : 1.10 174.53 10.91 0.00 0.00 354613.98 24855.13 323116.75
00:16:26.764 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:26.764 Verification LBA range: start 0x0 length 0x400
00:16:26.764 Nvme3n1 : 1.08 177.11 11.07 0.00 0.00 341608.42 18252.99 312242.63
00:16:26.764 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:26.764 Verification LBA range: start 0x0 length 0x400
00:16:26.764 Nvme4n1 : 1.09 190.59 11.91 0.00 0.00 303785.22 19320.98 310689.19
00:16:26.764 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:26.764 Verification LBA range: start 0x0 length 0x400
00:16:26.764 Nvme5n1 : 1.11 176.95 11.06 0.00 0.00 326315.91 3446.71 320009.86
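[Editor's note] The read_io_count polling traced above (12, then 75, then 131 reads) is shutdown.sh's waitforio gate: bdevperf counts as "doing I/O" once Nvme1n1 has completed 100 reads, checked at most ten times, 0.25 s apart. Reassembled from the trace into one place:

    waitforio() {
        local rpc_sock=$1 bdev=$2 i read_io_count ret=1
        for ((i = 10; i != 0; i--)); do
            read_io_count=$(./scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
                jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0    # traffic confirmed; safe to start shutting things down
                break
            fi
            sleep 0.25
        done
        return $ret
    }
    # as traced: waitforio /var/tmp/bdevperf.sock Nvme1n1

Only after this returns 0 does the test kill bdevperf mid-flight, which is the whole point of a shutdown test: the target must come down cleanly while I/O is in progress.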
00:16:26.764 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:26.764 Verification LBA range: start 0x0 length 0x400
00:16:26.764 Nvme6n1 : 1.13 226.10 14.13 0.00 0.00 249822.25 15049.01 312242.63
00:16:26.764 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:26.764 Verification LBA range: start 0x0 length 0x400
00:16:26.764 Nvme7n1 : 1.13 226.99 14.19 0.00 0.00 243102.91 37088.52 293601.28
00:16:26.764 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:26.764 Verification LBA range: start 0x0 length 0x400
00:16:26.764 Nvme8n1 : 1.11 172.76 10.80 0.00 0.00 312625.43 21845.33 324670.20
00:16:26.764 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:26.764 Verification LBA range: start 0x0 length 0x400
00:16:26.764 Nvme9n1 : 1.12 171.68 10.73 0.00 0.00 307104.93 25631.86 330883.98
00:16:26.764 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:26.764 Verification LBA range: start 0x0 length 0x400
00:16:26.764 Nvme10n1 : 1.12 170.94 10.68 0.00 0.00 301338.42 24272.59 338651.21
00:16:26.764 ===================================================================================================================
00:16:26.764 Total : 1866.26 116.64 0.00 0.00 305427.76 3446.71 338651.21
00:16:27.022 14:21:08 -- target/shutdown.sh@113 -- # sleep 1
00:16:27.955 14:21:09 -- target/shutdown.sh@114 -- # kill -0 3169653
00:16:27.955 14:21:09 -- target/shutdown.sh@116 -- # stoptarget
00:16:27.955 14:21:09 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:16:27.955 14:21:09 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:16:27.955 14:21:09 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:16:27.955 14:21:09 -- target/shutdown.sh@45 -- # nvmftestfini
00:16:27.955 14:21:09 -- nvmf/common.sh@477 -- # nvmfcleanup
00:16:27.955 14:21:09 -- nvmf/common.sh@117 -- # sync
00:16:27.955 14:21:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:27.955 14:21:09 -- nvmf/common.sh@120 -- # set +e
00:16:27.955 14:21:09 -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:27.955 14:21:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:16:28.213 14:21:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:28.213 14:21:09 -- nvmf/common.sh@124 -- # set -e
00:16:28.213 14:21:09 -- nvmf/common.sh@125 -- # return 0
00:16:28.213 14:21:09 -- nvmf/common.sh@478 -- # '[' -n 3169653 ']'
00:16:28.213 14:21:09 -- nvmf/common.sh@479 -- # killprocess 3169653
00:16:28.213 14:21:09 -- common/autotest_common.sh@936 -- # '[' -z 3169653 ']'
00:16:28.213 14:21:09 -- common/autotest_common.sh@940 -- # kill -0 3169653
00:16:28.213 14:21:09 -- common/autotest_common.sh@941 -- # uname
00:16:28.213 14:21:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:28.213 14:21:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3169653
00:16:28.213 14:21:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:16:28.213 14:21:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:16:28.213 14:21:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3169653'
00:16:28.213 killing process with pid 3169653
00:16:28.213 14:21:09 -- common/autotest_common.sh@955 -- # kill 3169653
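[Editor's note] killprocess, run here first against bdevperf (3169801) and now against the target (3169653), is deliberately defensive: it refuses an empty pid, checks the process is alive, and inspects the command name so it never signals a sudo wrapper by mistake. Condensed from the autotest_common.sh trace above (the early return on an already-dead pid is an assumption about untraced branches):

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                  # refuse an empty pid
        kill -0 "$pid" 2>/dev/null || return 0     # assumed: nothing left to kill
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1     # never signal the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                # reap and pick up the exit code
    }

The "Received shutdown signal" banner and the per-controller latency table above are bdevperf's signal handler flushing its results on the way out; note that all ten connections survived the run with zero Fail/s and zero TO/s.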
00:16:28.213 14:21:09 -- common/autotest_common.sh@960 -- # wait 3169653
00:16:28.471 14:21:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:16:28.471 14:21:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:16:28.471 14:21:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:16:28.471 14:21:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:28.471 14:21:09 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:16:28.471 14:21:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:28.471 14:21:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:28.471 14:21:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:31.005 14:21:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:16:31.005
00:16:31.005 real 0m7.761s
00:16:31.005 user 0m23.765s
00:16:31.005 sys 0m1.456s
00:16:31.005 14:21:11 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:16:31.005 14:21:11 -- common/autotest_common.sh@10 -- # set +x
00:16:31.005 ************************************
00:16:31.005 END TEST nvmf_shutdown_tc2
00:16:31.005 ************************************
00:16:31.005 14:21:12 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3
00:16:31.005 14:21:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:16:31.005 14:21:12 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:31.005 14:21:12 -- common/autotest_common.sh@10 -- # set +x
00:16:31.005 ************************************
00:16:31.005 START TEST nvmf_shutdown_tc3
00:16:31.005 ************************************
00:16:31.005 14:21:12 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3
00:16:31.005 14:21:12 -- target/shutdown.sh@121 -- # starttarget
00:16:31.005 14:21:12 -- target/shutdown.sh@15 -- # nvmftestinit
00:16:31.005 14:21:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:16:31.005 14:21:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:31.005 14:21:12 -- nvmf/common.sh@437 -- # prepare_net_devs
00:16:31.005 14:21:12 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:16:31.005 14:21:12 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:16:31.005 14:21:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:31.005 14:21:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:31.005 14:21:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:31.005 14:21:12 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:16:31.005 14:21:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:16:31.005 14:21:12 -- nvmf/common.sh@285 -- # xtrace_disable
00:16:31.005 14:21:12 -- common/autotest_common.sh@10 -- # set +x
00:16:31.005 14:21:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:16:31.005 14:21:12 -- nvmf/common.sh@291 -- # pci_devs=()
00:16:31.005 14:21:12 -- nvmf/common.sh@291 -- # local -a pci_devs
00:16:31.005 14:21:12 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:16:31.005 14:21:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:16:31.005 14:21:12 -- nvmf/common.sh@293 -- # pci_drivers=()
00:16:31.005 14:21:12 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:16:31.005 14:21:12 -- nvmf/common.sh@295 -- # net_devs=()
00:16:31.005 14:21:12 -- nvmf/common.sh@295 -- # local -ga net_devs
00:16:31.005 14:21:12 -- nvmf/common.sh@296 -- # e810=()
00:16:31.005 14:21:12 -- nvmf/common.sh@296 -- # local -ga e810
00:16:31.005 14:21:12 -- nvmf/common.sh@297 -- # x722=()
00:16:31.005 14:21:12 -- nvmf/common.sh@297 -- # local -ga x722
00:16:31.005 14:21:12 -- nvmf/common.sh@298 -- # mlx=()
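[Editor's note] Before tc3's device scan continues below, it is worth spelling out what the nvmftestfini path traced above actually did: unload the kernel initiator modules with a retry loop (set +e tolerates modules that are still busy or were never loaded), drop the test namespace, and flush the leftover initiator address. A sketch under stated assumptions; the per-try back-off and the ip netns delete inside _remove_spdk_ns are assumptions, since its output is redirected away in this log:

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # also drags out nvme_fabrics/nvme_keyring
        sleep 1                            # assumed back-off; first try succeeded here
    done
    modprobe -v -r nvme-fabrics
    set -e
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1

With tc2 finished (7.761 s wall clock), starttarget for tc3 rebuilds everything from scratch, beginning with NIC rediscovery on the lines that follow.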
00:16:31.005 14:21:12 -- nvmf/common.sh@298 -- # local -ga mlx
00:16:31.005 14:21:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:16:31.005 14:21:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:16:31.005 14:21:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:16:31.005 14:21:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:16:31.005 14:21:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:16:31.005 14:21:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:16:31.005 14:21:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:16:31.005 14:21:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:16:31.005 14:21:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:16:31.005 14:21:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:16:31.005 14:21:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:16:31.005 14:21:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:16:31.005 14:21:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:16:31.005 14:21:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:16:31.005 14:21:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:16:31.005 14:21:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:16:31.005 14:21:12 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:16:31.005 14:21:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:16:31.005 14:21:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)'
00:16:31.005 Found 0000:08:00.0 (0x8086 - 0x159b)
00:16:31.005 14:21:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:16:31.005 14:21:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:16:31.005 14:21:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:31.005 14:21:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:31.005 14:21:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:16:31.005 14:21:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:16:31.005 14:21:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)'
00:16:31.005 Found 0000:08:00.1 (0x8086 - 0x159b)
00:16:31.005 14:21:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:16:31.005 14:21:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:16:31.005 14:21:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:31.005 14:21:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:31.005 14:21:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:16:31.005 14:21:12 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:16:31.005 14:21:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:16:31.005 14:21:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:16:31.005 14:21:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:16:31.005 14:21:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:31.005 14:21:12 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:16:31.005 14:21:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:31.005 14:21:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0'
00:16:31.005 Found net devices under 0000:08:00.0: cvl_0_0
00:16:31.005 14:21:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:16:31.005 14:21:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
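[Editor's note] The scan just traced (and finishing on the next lines for the second port) works in two stages: known Intel/Mellanox device IDs select the candidate PCI functions, then each function's bound netdev is read back from sysfs. A sketch of that logic, assuming the e810 device ID 0x159b seen in this run; pci_bus_cache is an associative array mapping "vendor:device" to PCI addresses, assumed to be populated elsewhere by the harness (e.g. from lspci output):

    intel=0x8086
    declare -a pci_devs net_devs
    pci_devs+=(${pci_bus_cache["$intel:0x159b"]})         # e810 ports, as above
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # netdevs bound to the function
        (( ${#pci_net_devs[@]} == 0 )) && continue        # skip unbound devices
        pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the sysfs path prefix
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

With both functions resolved (0000:08:00.0 -> cvl_0_0, 0000:08:00.1 -> cvl_0_1), net_devs holds two entries, so the (( 2 > 1 )) branch in nvmf_tcp_init again selects the dual-interface namespace setup.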
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.005 14:21:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:31.005 14:21:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.005 14:21:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:16:31.005 Found net devices under 0000:08:00.1: cvl_0_1 00:16:31.005 14:21:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.005 14:21:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:31.005 14:21:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:31.005 14:21:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:31.005 14:21:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:31.005 14:21:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:31.005 14:21:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:31.005 14:21:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:31.005 14:21:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:31.005 14:21:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:31.005 14:21:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:31.005 14:21:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:31.005 14:21:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:31.005 14:21:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:31.005 14:21:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:31.005 14:21:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:31.005 14:21:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:31.005 14:21:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:31.005 14:21:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:31.005 14:21:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:31.005 14:21:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:31.005 14:21:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:31.005 14:21:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:31.005 14:21:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:31.005 14:21:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:31.005 14:21:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:31.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:31.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:16:31.005 00:16:31.005 --- 10.0.0.2 ping statistics --- 00:16:31.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.005 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:16:31.005 14:21:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:31.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:31.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms
00:16:31.005
00:16:31.005 --- 10.0.0.1 ping statistics ---
00:16:31.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:31.005 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms
00:16:31.005 14:21:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:31.005 14:21:12 -- nvmf/common.sh@411 -- # return 0
00:16:31.005 14:21:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:16:31.005 14:21:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:31.005 14:21:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:16:31.005 14:21:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:16:31.005 14:21:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:31.005 14:21:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:16:31.005 14:21:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:16:31.005 14:21:12 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:16:31.005 14:21:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:16:31.005 14:21:12 -- common/autotest_common.sh@710 -- # xtrace_disable
00:16:31.005 14:21:12 -- common/autotest_common.sh@10 -- # set +x
00:16:31.005 14:21:12 -- nvmf/common.sh@470 -- # nvmfpid=3171206
00:16:31.005 14:21:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:16:31.006 14:21:12 -- nvmf/common.sh@471 -- # waitforlisten 3171206
00:16:31.006 14:21:12 -- common/autotest_common.sh@817 -- # '[' -z 3171206 ']'
00:16:31.006 14:21:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:31.006 14:21:12 -- common/autotest_common.sh@822 -- # local max_retries=100
00:16:31.006 14:21:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:31.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:31.006 14:21:12 -- common/autotest_common.sh@826 -- # xtrace_disable
00:16:31.006 14:21:12 -- common/autotest_common.sh@10 -- # set +x
00:16:31.006 [2024-04-26 14:21:12.319474] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
00:16:31.006 [2024-04-26 14:21:12.319577] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:31.006 EAL: No free 2048 kB hugepages reported on node 1
00:16:31.006 [2024-04-26 14:21:12.386020] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:31.006 [2024-04-26 14:21:12.504712] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:31.006 [2024-04-26 14:21:12.504773] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:31.006 [2024-04-26 14:21:12.504789] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:31.006 [2024-04-26 14:21:12.504802] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:31.006 [2024-04-26 14:21:12.504814] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
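[Editor's note] A detail worth noticing in the launch line above: the target command is now prefixed with "ip netns exec cvl_0_0_ns_spdk" three times, where the tc2 launch earlier showed it twice. Each nvmf_tcp_init run executes nvmf/common.sh@270,

    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # prepends once per init

and NVMF_APP persists in the shell across tests, so every nvmftestinit since the suite started has contributed one more wrapper. Nesting ip netns exec into the same namespace is harmless (the inner exec just re-enters cvl_0_0_ns_spdk), so the target still lands where it should; it is cosmetic accumulation, not a bug in this run.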
00:16:31.006 [2024-04-26 14:21:12.504902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:16:31.006 [2024-04-26 14:21:12.504984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:16:31.006 [2024-04-26 14:21:12.505064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:16:31.006 [2024-04-26 14:21:12.505068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:31.264 14:21:12 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:16:31.264 14:21:12 -- common/autotest_common.sh@850 -- # return 0
00:16:31.264 14:21:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:16:31.264 14:21:12 -- common/autotest_common.sh@716 -- # xtrace_disable
00:16:31.264 14:21:12 -- common/autotest_common.sh@10 -- # set +x
00:16:31.264 14:21:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:31.264 14:21:12 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:31.264 14:21:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:31.264 14:21:12 -- common/autotest_common.sh@10 -- # set +x
00:16:31.264 [2024-04-26 14:21:12.670399] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:31.264 14:21:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:16:31.264 14:21:12 -- target/shutdown.sh@22 -- # num_subsystems=({1..10})
00:16:31.264 14:21:12 -- target/shutdown.sh@24 -- # timing_enter create_subsystems
00:16:31.264 14:21:12 -- common/autotest_common.sh@710 -- # xtrace_disable
00:16:31.264 14:21:12 -- common/autotest_common.sh@10 -- # set +x
00:16:31.264 14:21:12 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:16:31.264 14:21:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:31.264 14:21:12 -- target/shutdown.sh@28 -- # cat
00:16:31.264 14:21:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:31.264 14:21:12 -- target/shutdown.sh@28 -- # cat
00:16:31.264 14:21:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:31.264 14:21:12 -- target/shutdown.sh@28 -- # cat
00:16:31.264 14:21:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:31.264 14:21:12 -- target/shutdown.sh@28 -- # cat
00:16:31.264 14:21:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:31.264 14:21:12 -- target/shutdown.sh@28 -- # cat
00:16:31.264 14:21:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:31.264 14:21:12 -- target/shutdown.sh@28 -- # cat
00:16:31.264 14:21:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:31.264 14:21:12 -- target/shutdown.sh@28 -- # cat
00:16:31.264 14:21:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:31.264 14:21:12 -- target/shutdown.sh@28 -- # cat
00:16:31.264 14:21:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:31.264 14:21:12 -- target/shutdown.sh@28 -- # cat
00:16:31.264 14:21:12 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:16:31.264 14:21:12 -- target/shutdown.sh@28 -- # cat
00:16:31.264 14:21:12 -- target/shutdown.sh@35 -- # rpc_cmd
00:16:31.264 14:21:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:16:31.264 14:21:12 -- common/autotest_common.sh@10 -- # set +x
00:16:31.264 Malloc1
00:16:31.264 [2024-04-26 14:21:12.756843] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:31.264 Malloc2
00:16:31.264 Malloc3 00:16:31.522 Malloc4 00:16:31.522 Malloc5 00:16:31.522 Malloc6 00:16:31.522 Malloc7 00:16:31.522 Malloc8 00:16:31.781 Malloc9 00:16:31.781 Malloc10 00:16:31.781 14:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:31.781 14:21:13 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:31.781 14:21:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:31.781 14:21:13 -- common/autotest_common.sh@10 -- # set +x 00:16:31.781 14:21:13 -- target/shutdown.sh@125 -- # perfpid=3171356 00:16:31.781 14:21:13 -- target/shutdown.sh@126 -- # waitforlisten 3171356 /var/tmp/bdevperf.sock 00:16:31.781 14:21:13 -- common/autotest_common.sh@817 -- # '[' -z 3171356 ']' 00:16:31.781 14:21:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:31.781 14:21:13 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:31.781 14:21:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:31.781 14:21:13 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:31.781 14:21:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:31.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:31.781 14:21:13 -- nvmf/common.sh@521 -- # config=() 00:16:31.781 14:21:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:31.781 14:21:13 -- nvmf/common.sh@521 -- # local subsystem config 00:16:31.781 14:21:13 -- common/autotest_common.sh@10 -- # set +x 00:16:31.781 14:21:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:31.781 14:21:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:31.781 { 00:16:31.781 "params": { 00:16:31.781 "name": "Nvme$subsystem", 00:16:31.781 "trtype": "$TEST_TRANSPORT", 00:16:31.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:31.781 "adrfam": "ipv4", 00:16:31.781 "trsvcid": "$NVMF_PORT", 00:16:31.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:31.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:31.781 "hdgst": ${hdgst:-false}, 00:16:31.781 "ddgst": ${ddgst:-false} 00:16:31.781 }, 00:16:31.781 "method": "bdev_nvme_attach_controller" 00:16:31.781 } 00:16:31.781 EOF 00:16:31.781 )") 00:16:31.781 14:21:13 -- nvmf/common.sh@543 -- # cat 00:16:31.781 14:21:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:31.781 14:21:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:31.781 { 00:16:31.781 "params": { 00:16:31.781 "name": "Nvme$subsystem", 00:16:31.781 "trtype": "$TEST_TRANSPORT", 00:16:31.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:31.781 "adrfam": "ipv4", 00:16:31.781 "trsvcid": "$NVMF_PORT", 00:16:31.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:31.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:31.781 "hdgst": ${hdgst:-false}, 00:16:31.781 "ddgst": ${ddgst:-false} 00:16:31.781 }, 00:16:31.781 "method": "bdev_nvme_attach_controller" 00:16:31.781 } 00:16:31.781 EOF 00:16:31.781 )") 00:16:31.781 14:21:13 -- nvmf/common.sh@543 -- # cat 00:16:31.781 14:21:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:31.781 14:21:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:31.781 { 00:16:31.781 "params": { 00:16:31.781 "name": "Nvme$subsystem", 00:16:31.781 "trtype": "$TEST_TRANSPORT", 00:16:31.781 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:16:31.781 "adrfam": "ipv4", 00:16:31.781 "trsvcid": "$NVMF_PORT", 00:16:31.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:31.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:31.781 "hdgst": ${hdgst:-false}, 00:16:31.781 "ddgst": ${ddgst:-false} 00:16:31.781 }, 00:16:31.781 "method": "bdev_nvme_attach_controller" 00:16:31.781 } 00:16:31.781 EOF 00:16:31.781 )") 00:16:31.781 14:21:13 -- nvmf/common.sh@543 -- # cat 00:16:31.781 14:21:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:31.781 14:21:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:31.781 { 00:16:31.781 "params": { 00:16:31.781 "name": "Nvme$subsystem", 00:16:31.781 "trtype": "$TEST_TRANSPORT", 00:16:31.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:31.781 "adrfam": "ipv4", 00:16:31.781 "trsvcid": "$NVMF_PORT", 00:16:31.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:31.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:31.781 "hdgst": ${hdgst:-false}, 00:16:31.781 "ddgst": ${ddgst:-false} 00:16:31.781 }, 00:16:31.781 "method": "bdev_nvme_attach_controller" 00:16:31.781 } 00:16:31.781 EOF 00:16:31.781 )") 00:16:31.781 14:21:13 -- nvmf/common.sh@543 -- # cat 00:16:31.781 14:21:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:31.781 14:21:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:31.781 { 00:16:31.781 "params": { 00:16:31.781 "name": "Nvme$subsystem", 00:16:31.781 "trtype": "$TEST_TRANSPORT", 00:16:31.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:31.781 "adrfam": "ipv4", 00:16:31.781 "trsvcid": "$NVMF_PORT", 00:16:31.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:31.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:31.781 "hdgst": ${hdgst:-false}, 00:16:31.781 "ddgst": ${ddgst:-false} 00:16:31.781 }, 00:16:31.781 "method": "bdev_nvme_attach_controller" 00:16:31.781 } 00:16:31.781 EOF 00:16:31.781 )") 00:16:31.781 14:21:13 -- nvmf/common.sh@543 -- # cat 00:16:31.781 14:21:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:31.781 14:21:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:31.781 { 00:16:31.781 "params": { 00:16:31.781 "name": "Nvme$subsystem", 00:16:31.781 "trtype": "$TEST_TRANSPORT", 00:16:31.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:31.781 "adrfam": "ipv4", 00:16:31.781 "trsvcid": "$NVMF_PORT", 00:16:31.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:31.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:31.781 "hdgst": ${hdgst:-false}, 00:16:31.781 "ddgst": ${ddgst:-false} 00:16:31.781 }, 00:16:31.781 "method": "bdev_nvme_attach_controller" 00:16:31.781 } 00:16:31.781 EOF 00:16:31.781 )") 00:16:31.781 14:21:13 -- nvmf/common.sh@543 -- # cat 00:16:31.781 14:21:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:31.781 14:21:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:31.781 { 00:16:31.781 "params": { 00:16:31.781 "name": "Nvme$subsystem", 00:16:31.781 "trtype": "$TEST_TRANSPORT", 00:16:31.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:31.781 "adrfam": "ipv4", 00:16:31.781 "trsvcid": "$NVMF_PORT", 00:16:31.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:31.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:31.781 "hdgst": ${hdgst:-false}, 00:16:31.781 "ddgst": ${ddgst:-false} 00:16:31.781 }, 00:16:31.781 "method": "bdev_nvme_attach_controller" 00:16:31.781 } 00:16:31.781 EOF 00:16:31.781 )") 00:16:31.781 14:21:13 -- nvmf/common.sh@543 -- # cat 00:16:31.781 14:21:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 
00:16:31.781 14:21:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:31.781 { 00:16:31.781 "params": { 00:16:31.781 "name": "Nvme$subsystem", 00:16:31.781 "trtype": "$TEST_TRANSPORT", 00:16:31.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:31.781 "adrfam": "ipv4", 00:16:31.781 "trsvcid": "$NVMF_PORT", 00:16:31.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:31.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:31.781 "hdgst": ${hdgst:-false}, 00:16:31.781 "ddgst": ${ddgst:-false} 00:16:31.781 }, 00:16:31.781 "method": "bdev_nvme_attach_controller" 00:16:31.781 } 00:16:31.781 EOF 00:16:31.781 )") 00:16:31.781 14:21:13 -- nvmf/common.sh@543 -- # cat 00:16:31.781 14:21:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:31.781 14:21:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:31.781 { 00:16:31.781 "params": { 00:16:31.781 "name": "Nvme$subsystem", 00:16:31.781 "trtype": "$TEST_TRANSPORT", 00:16:31.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:31.781 "adrfam": "ipv4", 00:16:31.781 "trsvcid": "$NVMF_PORT", 00:16:31.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:31.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:31.781 "hdgst": ${hdgst:-false}, 00:16:31.781 "ddgst": ${ddgst:-false} 00:16:31.781 }, 00:16:31.781 "method": "bdev_nvme_attach_controller" 00:16:31.781 } 00:16:31.781 EOF 00:16:31.781 )") 00:16:31.781 14:21:13 -- nvmf/common.sh@543 -- # cat 00:16:31.781 14:21:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:31.781 14:21:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:31.782 { 00:16:31.782 "params": { 00:16:31.782 "name": "Nvme$subsystem", 00:16:31.782 "trtype": "$TEST_TRANSPORT", 00:16:31.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:31.782 "adrfam": "ipv4", 00:16:31.782 "trsvcid": "$NVMF_PORT", 00:16:31.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:31.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:31.782 "hdgst": ${hdgst:-false}, 00:16:31.782 "ddgst": ${ddgst:-false} 00:16:31.782 }, 00:16:31.782 "method": "bdev_nvme_attach_controller" 00:16:31.782 } 00:16:31.782 EOF 00:16:31.782 )") 00:16:31.782 14:21:13 -- nvmf/common.sh@543 -- # cat 00:16:31.782 14:21:13 -- nvmf/common.sh@545 -- # jq . 
00:16:31.782 14:21:13 -- nvmf/common.sh@546 -- # IFS=, 00:16:31.782 14:21:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:31.782 "params": { 00:16:31.782 "name": "Nvme1", 00:16:31.782 "trtype": "tcp", 00:16:31.782 "traddr": "10.0.0.2", 00:16:31.782 "adrfam": "ipv4", 00:16:31.782 "trsvcid": "4420", 00:16:31.782 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:31.782 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:31.782 "hdgst": false, 00:16:31.782 "ddgst": false 00:16:31.782 }, 00:16:31.782 "method": "bdev_nvme_attach_controller" 00:16:31.782 },{ 00:16:31.782 "params": { 00:16:31.782 "name": "Nvme2", 00:16:31.782 "trtype": "tcp", 00:16:31.782 "traddr": "10.0.0.2", 00:16:31.782 "adrfam": "ipv4", 00:16:31.782 "trsvcid": "4420", 00:16:31.782 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:31.782 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:31.782 "hdgst": false, 00:16:31.782 "ddgst": false 00:16:31.782 }, 00:16:31.782 "method": "bdev_nvme_attach_controller" 00:16:31.782 },{ 00:16:31.782 "params": { 00:16:31.782 "name": "Nvme3", 00:16:31.782 "trtype": "tcp", 00:16:31.782 "traddr": "10.0.0.2", 00:16:31.782 "adrfam": "ipv4", 00:16:31.782 "trsvcid": "4420", 00:16:31.782 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:31.782 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:31.782 "hdgst": false, 00:16:31.782 "ddgst": false 00:16:31.782 }, 00:16:31.782 "method": "bdev_nvme_attach_controller" 00:16:31.782 },{ 00:16:31.782 "params": { 00:16:31.782 "name": "Nvme4", 00:16:31.782 "trtype": "tcp", 00:16:31.782 "traddr": "10.0.0.2", 00:16:31.782 "adrfam": "ipv4", 00:16:31.782 "trsvcid": "4420", 00:16:31.782 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:31.782 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:31.782 "hdgst": false, 00:16:31.782 "ddgst": false 00:16:31.782 }, 00:16:31.782 "method": "bdev_nvme_attach_controller" 00:16:31.782 },{ 00:16:31.782 "params": { 00:16:31.782 "name": "Nvme5", 00:16:31.782 "trtype": "tcp", 00:16:31.782 "traddr": "10.0.0.2", 00:16:31.782 "adrfam": "ipv4", 00:16:31.782 "trsvcid": "4420", 00:16:31.782 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:31.782 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:31.782 "hdgst": false, 00:16:31.782 "ddgst": false 00:16:31.782 }, 00:16:31.782 "method": "bdev_nvme_attach_controller" 00:16:31.782 },{ 00:16:31.782 "params": { 00:16:31.782 "name": "Nvme6", 00:16:31.782 "trtype": "tcp", 00:16:31.782 "traddr": "10.0.0.2", 00:16:31.782 "adrfam": "ipv4", 00:16:31.782 "trsvcid": "4420", 00:16:31.782 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:31.782 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:31.782 "hdgst": false, 00:16:31.782 "ddgst": false 00:16:31.782 }, 00:16:31.782 "method": "bdev_nvme_attach_controller" 00:16:31.782 },{ 00:16:31.782 "params": { 00:16:31.782 "name": "Nvme7", 00:16:31.782 "trtype": "tcp", 00:16:31.782 "traddr": "10.0.0.2", 00:16:31.782 "adrfam": "ipv4", 00:16:31.782 "trsvcid": "4420", 00:16:31.782 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:31.782 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:31.782 "hdgst": false, 00:16:31.782 "ddgst": false 00:16:31.782 }, 00:16:31.782 "method": "bdev_nvme_attach_controller" 00:16:31.782 },{ 00:16:31.782 "params": { 00:16:31.782 "name": "Nvme8", 00:16:31.782 "trtype": "tcp", 00:16:31.782 "traddr": "10.0.0.2", 00:16:31.782 "adrfam": "ipv4", 00:16:31.782 "trsvcid": "4420", 00:16:31.782 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:31.782 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:31.782 "hdgst": false, 00:16:31.782 "ddgst": false 00:16:31.782 }, 00:16:31.782 "method": 
"bdev_nvme_attach_controller" 00:16:31.782 },{ 00:16:31.782 "params": { 00:16:31.782 "name": "Nvme9", 00:16:31.782 "trtype": "tcp", 00:16:31.782 "traddr": "10.0.0.2", 00:16:31.782 "adrfam": "ipv4", 00:16:31.782 "trsvcid": "4420", 00:16:31.782 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:31.782 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:31.782 "hdgst": false, 00:16:31.782 "ddgst": false 00:16:31.782 }, 00:16:31.782 "method": "bdev_nvme_attach_controller" 00:16:31.782 },{ 00:16:31.782 "params": { 00:16:31.782 "name": "Nvme10", 00:16:31.782 "trtype": "tcp", 00:16:31.782 "traddr": "10.0.0.2", 00:16:31.782 "adrfam": "ipv4", 00:16:31.782 "trsvcid": "4420", 00:16:31.782 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:31.782 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:31.782 "hdgst": false, 00:16:31.782 "ddgst": false 00:16:31.782 }, 00:16:31.782 "method": "bdev_nvme_attach_controller" 00:16:31.782 }' 00:16:31.782 [2024-04-26 14:21:13.239334] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:16:31.782 [2024-04-26 14:21:13.239425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3171356 ] 00:16:31.782 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.782 [2024-04-26 14:21:13.300984] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.040 [2024-04-26 14:21:13.415436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.940 Running I/O for 10 seconds... 00:16:33.940 14:21:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:33.940 14:21:15 -- common/autotest_common.sh@850 -- # return 0 00:16:33.940 14:21:15 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:33.940 14:21:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:33.940 14:21:15 -- common/autotest_common.sh@10 -- # set +x 00:16:33.940 14:21:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:33.940 14:21:15 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:33.940 14:21:15 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:16:33.940 14:21:15 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:33.940 14:21:15 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:16:33.940 14:21:15 -- target/shutdown.sh@57 -- # local ret=1 00:16:33.940 14:21:15 -- target/shutdown.sh@58 -- # local i 00:16:33.940 14:21:15 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:16:33.940 14:21:15 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:33.940 14:21:15 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:33.940 14:21:15 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:33.940 14:21:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:33.940 14:21:15 -- common/autotest_common.sh@10 -- # set +x 00:16:33.940 14:21:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:33.940 14:21:15 -- target/shutdown.sh@60 -- # read_io_count=3 00:16:33.940 14:21:15 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:16:33.940 14:21:15 -- target/shutdown.sh@67 -- # sleep 0.25 00:16:34.198 14:21:15 -- target/shutdown.sh@59 -- # (( i-- )) 00:16:34.198 14:21:15 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:34.198 14:21:15 -- target/shutdown.sh@60 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:34.198 14:21:15 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:34.198 14:21:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.198 14:21:15 -- common/autotest_common.sh@10 -- # set +x 00:16:34.198 14:21:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.198 14:21:15 -- target/shutdown.sh@60 -- # read_io_count=67 00:16:34.198 14:21:15 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:16:34.198 14:21:15 -- target/shutdown.sh@67 -- # sleep 0.25 00:16:34.471 14:21:15 -- target/shutdown.sh@59 -- # (( i-- )) 00:16:34.471 14:21:15 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:34.471 14:21:15 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:34.471 14:21:15 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:34.471 14:21:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.471 14:21:15 -- common/autotest_common.sh@10 -- # set +x 00:16:34.471 14:21:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.471 14:21:15 -- target/shutdown.sh@60 -- # read_io_count=131 00:16:34.471 14:21:15 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:16:34.471 14:21:15 -- target/shutdown.sh@64 -- # ret=0 00:16:34.471 14:21:15 -- target/shutdown.sh@65 -- # break 00:16:34.471 14:21:15 -- target/shutdown.sh@69 -- # return 0 00:16:34.471 14:21:15 -- target/shutdown.sh@135 -- # killprocess 3171206 00:16:34.471 14:21:15 -- common/autotest_common.sh@936 -- # '[' -z 3171206 ']' 00:16:34.471 14:21:15 -- common/autotest_common.sh@940 -- # kill -0 3171206 00:16:34.471 14:21:15 -- common/autotest_common.sh@941 -- # uname 00:16:34.471 14:21:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:34.471 14:21:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3171206 00:16:34.471 14:21:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:34.471 14:21:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:34.471 14:21:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3171206' 00:16:34.471 killing process with pid 3171206 00:16:34.471 14:21:15 -- common/autotest_common.sh@955 -- # kill 3171206 00:16:34.471 14:21:15 -- common/autotest_common.sh@960 -- # wait 3171206 00:16:34.471 [2024-04-26 14:21:15.964351] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1575920 is same with the state(5) to be set 00:16:34.471 [2024-04-26 14:21:15.964451] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1575920 is same with the state(5) to be set 00:16:34.471 [2024-04-26 14:21:15.964468] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1575920 is same with the state(5) to be set 00:16:34.471 [2024-04-26 14:21:15.964482] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1575920 is same with the state(5) to be set 00:16:34.471 [2024-04-26 14:21:15.964495] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1575920 is same with the state(5) to be set 00:16:34.471 [2024-04-26 14:21:15.964508] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1575920 is same with the state(5) to be set 00:16:34.471 [2024-04-26 14:21:15.964522] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1575920 is same with the state(5) to be set 00:16:34.471 [2024-04-26 14:21:15.964536] 
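The target/shutdown.sh trace above is its waitforio loop: it polls bdevperf's RPC socket until Nvme1n1 reports at least 100 completed reads (3, then 67, then 131 in this run, with a 0.25 s back-off and at most 10 attempts) before killprocess takes down target pid 3171206. A stand-alone sketch of that loop, assuming scripts/rpc.py from a local SPDK checkout and the same /var/tmp/bdevperf.sock socket:

#!/usr/bin/env bash
# Re-creation of the waitforio helper traced above
# (sketch; the rpc.py path and socket are assumptions about the local setup).
rpc_py() { "$HOME/spdk/scripts/rpc.py" -s /var/tmp/bdevperf.sock "$@"; }

waitforio() {
    local bdev=$1 read_io_count i ret=1
    for ((i = 10; i != 0; i--)); do           # up to 10 attempts, as in the log
        read_io_count=$(rpc_py bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then # same -ge 100 threshold as shutdown.sh@63
            ret=0
            break
        fi
        sleep 0.25                            # the 0.25 s back-off seen above
    done
    return $ret
}

waitforio Nvme1n1 || echo "Nvme1n1 never reached 100 reads" >&2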
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1575920 is same with the state(5) to be set
[... the same tcp.c:1587 *ERROR* line repeats dozens of times for each of tqpair=0x1575920, 0x1575db0, 0x1576240, 0x15766d0, 0x1576b60 and 0x1577010 as the target shuts down; duplicate entries collapsed ...]
state(5) to be set 00:16:34.475 [2024-04-26 14:21:15.975600] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577010 is same with the state(5) to be set 00:16:34.475 [2024-04-26 14:21:15.977125] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.475 [2024-04-26 14:21:15.977154] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.475 [2024-04-26 14:21:15.977169] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977183] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977196] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977215] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-04-26 14:21:15.977229] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with id:0 cdw10:00000000 cdw11:00000000 00:16:34.476 the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977253] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.476 [2024-04-26 14:21:15.977267] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977282] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with [2024-04-26 14:21:15.977283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsthe state(5) to be set 00:16:34.476 id:0 cdw10:00000000 cdw11:00000000 00:16:34.476 [2024-04-26 14:21:15.977298] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with [2024-04-26 14:21:15.977299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:16:34.476 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.476 [2024-04-26 14:21:15.977313] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.476 [2024-04-26 14:21:15.977327] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with [2024-04-26 14:21:15.977331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:16:34.476 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.476 [2024-04-26 14:21:15.977348] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with [2024-04-26 14:21:15.977350] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsthe state(5) to be set 00:16:34.476 id:0 cdw10:00000000 cdw11:00000000 00:16:34.476 [2024-04-26 14:21:15.977364] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with [2024-04-26 14:21:15.977366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:16:34.476 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.476 [2024-04-26 14:21:15.977380] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977382] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255e960 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977395] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977409] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977423] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977436] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977450] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.476 [2024-04-26 14:21:15.977464] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.476 [2024-04-26 14:21:15.977477] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.476 [2024-04-26 14:21:15.977494] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-04-26 14:21:15.977509] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.476 the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977525] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 ns[2024-04-26 14:21:15.977538] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with id:0 cdw10:00000000 cdw11:00000000 00:16:34.476 the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977554] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.476 [2024-04-26 14:21:15.977567] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.476 [2024-04-26 14:21:15.977581] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.476 [2024-04-26 14:21:15.977596] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977608] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255e7a0 is same [2024-04-26 14:21:15.977610] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with with the state(5) to be set 00:16:34.476 the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977625] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977647] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977661] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-04-26 14:21:15.977675] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with id:0 cdw10:00000000 cdw11:00000000 00:16:34.476 the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-04-26 14:21:15.977692] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.476 the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977712] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with [2024-04-26 14:21:15.977712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsthe state(5) to be set 00:16:34.476 id:0 cdw10:00000000 cdw11:00000000 00:16:34.476 [2024-04-26 14:21:15.977727] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with [2024-04-26 14:21:15.977729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:16:34.476 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.476 [2024-04-26 14:21:15.977742] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with [2024-04-26 14:21:15.977745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:2 nsthe state(5) to be set 00:16:34.476 id:0 cdw10:00000000 cdw11:00000000 00:16:34.476 [2024-04-26 14:21:15.977762] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with [2024-04-26 14:21:15.977763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:16:34.476 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.476 [2024-04-26 14:21:15.977779] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.476 [2024-04-26 14:21:15.977793] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.476 [2024-04-26 14:21:15.977796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.477 [2024-04-26 14:21:15.977807] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.977810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2496220 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.977821] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.977835] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.977848] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.977861] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.977861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.477 [2024-04-26 14:21:15.977875] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.977882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.477 [2024-04-26 14:21:15.977889] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.977903] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.977906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.477 [2024-04-26 14:21:15.977917] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.977930] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.977933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.477 [2024-04-26 14:21:15.977944] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.977952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.477 [2024-04-26 14:21:15.977958] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.977966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.477 [2024-04-26 14:21:15.977972] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.977982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.477 [2024-04-26 14:21:15.977985] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.977996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.477 [2024-04-26 14:21:15.977999] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.978010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249ade0 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.978018] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.978033] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.978046] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.978059] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with [2024-04-26 14:21:15.978060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsthe state(5) to be set 00:16:34.477 id:0 cdw10:00000000 cdw11:00000000 00:16:34.477 [2024-04-26 14:21:15.978079] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1577930 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.978083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.477 [2024-04-26 14:21:15.978099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.477 [2024-04-26 14:21:15.978113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.477 [2024-04-26 14:21:15.978128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.477 [2024-04-26 14:21:15.978142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.477 [2024-04-26 14:21:15.978157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.477 [2024-04-26 14:21:15.978171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.477 [2024-04-26 14:21:15.978184] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24badf0 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.978232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.477 [2024-04-26 14:21:15.978252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.477 [2024-04-26 14:21:15.978268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.477 [2024-04-26 14:21:15.978282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.477 [2024-04-26 14:21:15.978297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.477 [2024-04-26 14:21:15.978311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.477 [2024-04-26 14:21:15.978326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.477 [2024-04-26 14:21:15.978340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.477 [2024-04-26 14:21:15.978354] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2062c90 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.978405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.477 [2024-04-26 14:21:15.978454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.477 [2024-04-26 14:21:15.978472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.477 [2024-04-26 14:21:15.978486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.477 [2024-04-26 14:21:15.978501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.477 [2024-04-26 14:21:15.978515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.477 [2024-04-26 14:21:15.978534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.477 [2024-04-26 14:21:15.978548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.477 [2024-04-26 14:21:15.978562] 
nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2660c70 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.978609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.477 [2024-04-26 14:21:15.978629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.477 [2024-04-26 14:21:15.978653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.477 [2024-04-26 14:21:15.978668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.477 [2024-04-26 14:21:15.978683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.477 [2024-04-26 14:21:15.978697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.477 [2024-04-26 14:21:15.978711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.477 [2024-04-26 14:21:15.978725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.477 [2024-04-26 14:21:15.978739] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2607110 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.978789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.477 [2024-04-26 14:21:15.978809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.477 [2024-04-26 14:21:15.978825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.477 [2024-04-26 14:21:15.978869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.477 [2024-04-26 14:21:15.978884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.477 [2024-04-26 14:21:15.978899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.477 [2024-04-26 14:21:15.978913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.477 [2024-04-26 14:21:15.978928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.477 [2024-04-26 14:21:15.978942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7130 is same with the state(5) to be set 00:16:34.477 [2024-04-26 14:21:15.978990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:34.477 [2024-04-26 14:21:15.979011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
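The recv-state flood above is a state-machine guard firing on both sides of the connection: the target (tcp.c:1587) and the host initiator (nvme_tcp.c:322) log this exact sentence when a caller asks a TCP qpair to enter the receive state it is already in, which during teardown plausibly happens once per code path that notices the dead socket and tries to park the qpair in its error state. A minimal sketch of that kind of guard, assuming simplified types (the struct, the enum layout, and the function name below are illustrative assumptions, not SPDK's actual definitions):

/* Sketch of a "refuse to re-enter the current state" guard matching the
 * log pattern above. All names and the enum layout are assumptions made
 * for illustration; they are not copied from SPDK. */
#include <stdio.h>

enum tcp_recv_state {
    RECV_STATE_AWAIT_PDU_READY,
    RECV_STATE_AWAIT_PDU_CH,
    RECV_STATE_AWAIT_PDU_PSH,
    RECV_STATE_AWAIT_PDU_PAYLOAD,
    RECV_STATE_AWAIT_REQ,
    RECV_STATE_ERROR,   /* would print as "state(5)"; mapping value 5 to an
                         * error state is an assumption, not SPDK's enum */
};

struct tcp_qpair {
    enum tcp_recv_state recv_state;
};

static void tcp_qpair_set_recv_state(struct tcp_qpair *tqpair,
                                     enum tcp_recv_state state)
{
    if (tqpair->recv_state == state) {
        /* Same wording as the flooded line: a transition into the
         * current state is reported and ignored, not re-executed. */
        fprintf(stderr,
                "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)tqpair, (int)state);
        return;
    }
    tqpair->recv_state = state;
}

int main(void)
{
    struct tcp_qpair q = { .recv_state = RECV_STATE_ERROR };
    /* Asking for the state the qpair is already in trips the guard. */
    tcp_qpair_set_recv_state(&q, RECV_STATE_ERROR);
    return 0;
}

Under that reading the repetition is noisy but harmless: each repeated line is one caller being told the qpair is already where it wanted it.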
00:16:34.478 [2024-04-26 14:21:15.980468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:34.478 [2024-04-26 14:21:15.980497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(the WRITE/ABORTED pair repeats for cid:29 through cid:63, lba 20096 through 24448 in steps of 128, and the same pattern then drains READ commands cid:0 through cid:27, lba 16384 through 19840, the last completion at 14:21:15.982561)
00:16:34.479 [2024-04-26 14:21:15.982607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:16:34.479 [2024-04-26 14:21:15.982685] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2493e30 was disconnected and freed. reset controller.
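Every completion in these abort runs carries the same status, printed by spdk_nvme_print_completion as "(00/08)": status code type 0x0 (Generic Command Status) and status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion, i.e. the submission queue was deleted (here by the reset path) while the WRITEs and READs were still in flight. The closing "CQ transport error -6" is errno 6, ENXIO, which the log itself expands as "No such device or address". A small decoder for the printed pair, as a sketch covering only the combination seen here:

/* Decode the "(SCT/SC)" pair printed with each completion above. Only the
 * one combination present in this log is mapped. */
#include <stdint.h>
#include <stdio.h>

static const char *decode_status(uint8_t sct, uint8_t sc)
{
    if (sct == 0x0 && sc == 0x08) {
        /* Generic Command Status, status code 0x08 */
        return "ABORTED - SQ DELETION (Command Aborted due to SQ Deletion)";
    }
    return "other status (see the NVMe base specification status tables)";
}

int main(void)
{
    printf("(00/08) -> %s\n", decode_status(0x0, 0x08));
    return 0;
}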
00:16:34.479 [2024-04-26 14:21:15.982754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:34.479 [2024-04-26 14:21:15.982775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(a second qpair drains the same way: the WRITE/ABORTED pair repeats for cid:1 through cid:39, lba 16512 through 21376 in steps of 128, through the completion at 14:21:15.984081, and the run continues)
00:16:34.480 [2024-04-26
14:21:15.984098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.480 [2024-04-26 14:21:15.984113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.480 [2024-04-26 14:21:15.984130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.480 [2024-04-26 14:21:15.984145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.480 [2024-04-26 14:21:15.984161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.480 [2024-04-26 14:21:15.984176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.480 [2024-04-26 14:21:15.984193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.480 [2024-04-26 14:21:15.984208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.480 [2024-04-26 14:21:15.984226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.480 [2024-04-26 14:21:15.984241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.480 [2024-04-26 14:21:15.984258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.984277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.984294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.984310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.984327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.984342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.984359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.984374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.984391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.984406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 
14:21:15.984423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.984438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.984456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.984471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.984489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.984504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.984521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.984536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.984553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.984568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.984586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.984601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.984618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.984640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.984659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.984674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.984695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.984711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.984728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.984743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 
14:21:15.984760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.984775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.984792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.984807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.984824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.984839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.984857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.984871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.984888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25cf990 is same with the state(5) to be set 00:16:34.481 [2024-04-26 14:21:15.985367] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x25cf990 was disconnected and freed. reset controller. 00:16:34.481 [2024-04-26 14:21:15.991937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.992007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.992043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.992060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.992078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.992094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.992111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.992126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.992143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.992158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.992175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.992202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.992220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.992236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.992252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.992267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.992285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.992300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.992317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.992332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.992349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.992364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.992382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.992397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.992414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.992429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.992446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.992461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.992478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.992493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.992510] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.992525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.992542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.992557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.992574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.992589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.992609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.992624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.481 [2024-04-26 14:21:15.992653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.481 [2024-04-26 14:21:15.992669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.992686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.992701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.992718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.992733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.992750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.992765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.992782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.992796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.992814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.992829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.992845] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.992860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.992877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.992892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.992909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.992924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.992941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.992955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.992972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.992987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993171] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993823] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.482 [2024-04-26 14:21:15.993939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.482 [2024-04-26 14:21:15.993956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.993971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.993987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.994002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.994020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.994035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.994053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.994068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.994085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.994100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.994223] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24901c0 was disconnected and freed. reset controller. 
00:16:34.483 [2024-04-26 14:21:15.994304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.994324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.994352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.994369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.994386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.994401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.994418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.994434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.994457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.994473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.994490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.994505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.994522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.994537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.994554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.994569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.994585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.994600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.994617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.994644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 
14:21:15.994662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.994677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.994695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.994710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.994726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.994741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.994758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.994773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.994791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.994806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.994822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.994837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.994854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.994872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.994890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.994905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.994922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.994936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.994953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.994968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.994984] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.994999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.995016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.995031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.995048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.995062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.995079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.995093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.995110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.995125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.995142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.995157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.995173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.995188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.995205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.995220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.995237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.995252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.995273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.995288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.995306] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.995321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.995337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.995352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.995369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.995383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.995400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.483 [2024-04-26 14:21:15.995415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.483 [2024-04-26 14:21:15.995432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.484 [2024-04-26 14:21:15.995447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.484 [2024-04-26 14:21:15.995463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.484 [2024-04-26 14:21:15.995478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.484 [2024-04-26 14:21:15.995495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.484 [2024-04-26 14:21:15.995510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.484 [2024-04-26 14:21:15.995526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.484 [2024-04-26 14:21:15.995541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.484 [2024-04-26 14:21:15.995558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.484 [2024-04-26 14:21:15.995572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.484 [2024-04-26 14:21:15.995589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.484 [2024-04-26 14:21:15.995604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.484 [2024-04-26 14:21:15.995621] nvme_qpair.c: 
00:16:34.484 [2024-04-26 14:21:15.995644 - 14:21:15.996383] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:38-61 nsid:1 lba:21248-24192 (lba +128 per command) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:34.484 [2024-04-26 14:21:15.996484] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2491670 was disconnected and freed. reset controller.
00:16:34.484 [2024-04-26 14:21:15.999189] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x255e960 (9): Bad file descriptor
00:16:34.484 [2024-04-26 14:21:15.999270] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x255e7a0 (9): Bad file descriptor
00:16:34.484 [2024-04-26 14:21:15.999301] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2496220 (9): Bad file descriptor
00:16:34.484 [2024-04-26 14:21:15.999331] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249ade0 (9): Bad file descriptor
00:16:34.484 [2024-04-26 14:21:15.999370] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24badf0 (9): Bad file descriptor
00:16:34.484 [2024-04-26 14:21:15.999402] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2062c90 (9): Bad file descriptor
00:16:34.484 [2024-04-26 14:21:15.999433] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2660c70 (9): Bad file descriptor
00:16:34.484 [2024-04-26 14:21:15.999464] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2607110 (9): Bad file descriptor
00:16:34.484 [2024-04-26 14:21:15.999490] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c7130 (9): Bad file descriptor
00:16:34.484 [2024-04-26 14:21:15.999515] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x255d470 (9): Bad file descriptor
00:16:34.484 [2024-04-26 14:21:16.002780] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:16:34.484 [2024-04-26 14:21:16.002866] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:16:34.484 [2024-04-26 14:21:16.003977] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:16:34.484 [2024-04-26 14:21:16.004013] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:16:34.484 [2024-04-26 14:21:16.004195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:34.484 [2024-04-26 14:21:16.004335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:34.484 [2024-04-26 14:21:16.004361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x255e7a0 with addr=10.0.0.2, port=4420
00:16:34.484 [2024-04-26 14:21:16.004380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255e7a0 is same with the state(5) to be set
00:16:34.484 [2024-04-26 14:21:16.004515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:34.484 [2024-04-26 14:21:16.004623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:34.485 [2024-04-26 14:21:16.004660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24c7130 with addr=10.0.0.2, port=4420
00:16:34.485 [2024-04-26 14:21:16.004678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24c7130 is same with the state(5) to be set
00:16:34.485 [2024-04-26 14:21:16.004782] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:16:34.485 [2024-04-26 14:21:16.004859] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:16:34.485 [2024-04-26 14:21:16.004937] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:16:34.485 [2024-04-26 14:21:16.005020] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:16:34.485 [2024-04-26 14:21:16.005091] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:16:34.485 [2024-04-26 14:21:16.005788] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:16:34.485 [2024-04-26 14:21:16.005986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:34.485 [2024-04-26 14:21:16.006163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:34.485 [2024-04-26 14:21:16.006189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x255d470 with addr=10.0.0.2, port=4420
00:16:34.485 [2024-04-26 14:21:16.006206] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255d470 is same with the state(5) to be set
00:16:34.485 [2024-04-26 14:21:16.006313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:34.485 [2024-04-26 14:21:16.006421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:34.485 [2024-04-26 14:21:16.006445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2607110 with addr=10.0.0.2, port=4420
00:16:34.485 [2024-04-26 14:21:16.006462] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2607110 is same with the state(5) to be set
00:16:34.485 [2024-04-26 14:21:16.006487] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x255e7a0 (9): Bad file descriptor
00:16:34.485 [2024-04-26 14:21:16.006521] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c7130 (9): Bad file descriptor
00:16:34.485 [2024-04-26 14:21:16.006644 - 14:21:16.008220] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:16-63 nsid:1 lba:18432-24448 (lba +128 per command) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:34.486 [2024-04-26 14:21:16.008237 - 14:21:16.008750] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:0-15 nsid:1 lba:24576-26496 (lba +128 per command) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:34.486 [2024-04-26 14:21:16.008766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2546fe0 is same with the state(5) to be set
00:16:34.486 [2024-04-26 14:21:16.008871] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2546fe0 was disconnected and freed. reset controller.
00:16:34.486 [2024-04-26 14:21:16.009039] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x255d470 (9): Bad file descriptor
00:16:34.487 [2024-04-26 14:21:16.009071] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2607110 (9): Bad file descriptor
00:16:34.487 [2024-04-26 14:21:16.009089] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:16:34.487 [2024-04-26 14:21:16.009104] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:16:34.487 [2024-04-26 14:21:16.009120] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:16:34.487 [2024-04-26 14:21:16.009143] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:16:34.487 [2024-04-26 14:21:16.009158] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:16:34.487 [2024-04-26 14:21:16.009172] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:16:34.487 [2024-04-26 14:21:16.010570] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:34.487 [2024-04-26 14:21:16.010605] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:34.487 [2024-04-26 14:21:16.010621] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:16:34.487 [2024-04-26 14:21:16.010666] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:16:34.487 [2024-04-26 14:21:16.010684] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:16:34.487 [2024-04-26 14:21:16.010698] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:16:34.487 [2024-04-26 14:21:16.010719] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:16:34.487 [2024-04-26 14:21:16.010734] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:16:34.487 [2024-04-26 14:21:16.010747] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:16:34.487 [2024-04-26 14:21:16.010882] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:34.487 [2024-04-26 14:21:16.010904] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:34.487 [2024-04-26 14:21:16.011127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:34.487 [2024-04-26 14:21:16.011250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:34.487 [2024-04-26 14:21:16.011277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2660c70 with addr=10.0.0.2, port=4420
00:16:34.487 [2024-04-26 14:21:16.011297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2660c70 is same with the state(5) to be set
00:16:34.487 [2024-04-26 14:21:16.011379 - 14:21:16.013473] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 (lba +128 per command) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:34.488 [2024-04-26 14:21:16.013489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d7620 is same with the state(5) to be set
00:16:34.488 [2024-04-26 14:21:16.014952 - 14:21:16.015966] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-29 nsid:1 lba:16384-20096 (lba +128 per command) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:34.489 [2024-04-26 14:21:16.015983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:34.489 [2024-04-26 14:21:16.015998] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.489 [2024-04-26 14:21:16.016015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.489 [2024-04-26 14:21:16.016030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.489 [2024-04-26 14:21:16.016047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.489 [2024-04-26 14:21:16.016062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.489 [2024-04-26 14:21:16.016079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.489 [2024-04-26 14:21:16.016094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.489 [2024-04-26 14:21:16.016111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.489 [2024-04-26 14:21:16.016126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.489 [2024-04-26 14:21:16.016143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.489 [2024-04-26 14:21:16.016158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.489 [2024-04-26 14:21:16.016184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.489 [2024-04-26 14:21:16.016200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.489 [2024-04-26 14:21:16.016217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.489 [2024-04-26 14:21:16.016232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.489 [2024-04-26 14:21:16.016249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.016264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.016281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.016296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.016313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.016328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.016346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.016362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.016379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.016394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.016411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.016427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.016444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.016459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.016476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.016490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.016508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.016522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.016539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.016554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.016572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.016591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.016608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.016624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.016648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.016664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.016682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.016697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.016714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.016729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.016746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.016761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.016778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.016793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.016810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.016825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.016842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.016858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.016875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.016890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.016907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.016922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.016939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.016954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.016971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.016986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.017007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.017023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.017040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.017055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.017072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.017087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.017103] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d88b0 is same with the state(5) to be set 00:16:34.490 [2024-04-26 14:21:16.018583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.018617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.018649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.018666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.018685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.018700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.018717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.018732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.018750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.018765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.018782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.490 [2024-04-26 14:21:16.018797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.490 [2024-04-26 14:21:16.018814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
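The "(00/08)" in every completion above is SPDK's (SCT/SC) rendering of the NVMe status field: status code type 0x0 (Generic Command Status) with status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion, i.e. each queued READ was force-completed when its submission queue went away. The trailing fields are sqhd (submission queue head pointer), p (phase tag), m (more), and dnr (do not retry). A minimal, self-contained C sketch of that decode follows; nvme_status_str is a hypothetical helper written for illustration, not a function from the SPDK tree:

    /* Decode the (SCT/SC) pair printed as "(00/08)" above. */
    #include <stdint.h>
    #include <stdio.h>

    static const char *nvme_status_str(uint8_t sct, uint8_t sc)
    {
        if (sct == 0x0) {                /* Generic Command Status */
            switch (sc) {
            case 0x00: return "SUCCESS";
            case 0x07: return "ABORTED - COMMAND ABORT REQUESTED";
            case 0x08: return "ABORTED - SQ DELETION";
            default:   return "GENERIC (other)";
            }
        }
        return "NON-GENERIC STATUS CODE TYPE";
    }

    int main(void)
    {
        /* (00/08): I/O still outstanding on a submission queue is
         * completed with this status when that queue is deleted. */
        printf("(00/08) -> %s\n", nvme_status_str(0x00, 0x08));
        return 0;
    }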
00:16:34.490 [2024-04-26 14:21:16.018583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... READ/completion pairs elided: READ cid:4-57 (lba:16896-23680), then WRITE cid:0-3 (lba:24576-24960), then READ cid:58-63 (lba:23808-24448), all len:128, each completed ABORTED - SQ DELETION (00/08) ...]
00:16:34.492 [2024-04-26 14:21:16.020749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25446d0 is same with the state(5) to be set
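The nvme_tcp.c errors interleaved with the abort storm say that nvme_tcp_qpair_set_recv_state was asked to set a qpair's receive state to the value it already holds (state(5)) while tqpairs 0x25d7620, 0x25d88b0 and 0x25446d0 were being torn down; the surrounding command/completion pairs appear to be the expected fallout of deleting submission queues with I/O still in flight. A rough, hypothetical C filter for sizing that fallout from the raw console stream (not part of the test suite; line-based, so a log entry split across a very long wrapped line may be undercounted):

    /* Count aborted completions and recv-state errors on stdin. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[8192];
        unsigned long aborted = 0, recv_errors = 0;
        const char *p;

        while (fgets(line, sizeof(line), stdin)) {
            /* a wrapped console line can hold several completions */
            for (p = line; (p = strstr(p, "ABORTED - SQ DELETION")) != NULL; p++)
                aborted++;
            if (strstr(line, "nvme_tcp_qpair_set_recv_state: *ERROR*"))
                recv_errors++;
        }
        printf("aborted completions: %lu, recv-state errors: %lu\n",
               aborted, recv_errors);
        return 0;
    }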
00:16:34.492 [2024-04-26 14:21:16.022228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... READ/completion pairs elided: cid:0-61 (lba:16384-24192, len:128), each completed ABORTED - SQ DELETION (00/08) ...]
00:16:34.493 [2024-04-26 14:21:16.024300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:34.493 [2024-04-26
14:21:16.024315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.493 [2024-04-26 14:21:16.024332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.493 [2024-04-26 14:21:16.024347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.493 [2024-04-26 14:21:16.024363] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2545b80 is same with the state(5) to be set 00:16:34.493 [2024-04-26 14:21:16.026139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.493 [2024-04-26 14:21:16.026175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.493 [2024-04-26 14:21:16.026202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.493 [2024-04-26 14:21:16.026218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.493 [2024-04-26 14:21:16.026235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.493 [2024-04-26 14:21:16.026250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.493 [2024-04-26 14:21:16.026267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.494 [2024-04-26 14:21:16.026288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.494 [2024-04-26 14:21:16.026305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.494 [2024-04-26 14:21:16.026320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.494 [2024-04-26 14:21:16.026337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.494 [2024-04-26 14:21:16.026359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.494 [2024-04-26 14:21:16.026376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.494 [2024-04-26 14:21:16.026391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.494 [2024-04-26 14:21:16.026408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.494 [2024-04-26 14:21:16.026423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.494 [2024-04-26 14:21:16.026440] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.494 [2024-04-26 14:21:16.026455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.494 [2024-04-26 14:21:16.026472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.494 [2024-04-26 14:21:16.026487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.494 [2024-04-26 14:21:16.026504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.494 [2024-04-26 14:21:16.026519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.494 [2024-04-26 14:21:16.026536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.494 [2024-04-26 14:21:16.026551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.494 [2024-04-26 14:21:16.026568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.494 [2024-04-26 14:21:16.026583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.494 [2024-04-26 14:21:16.026601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.494 [2024-04-26 14:21:16.026616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.494 [2024-04-26 14:21:16.026649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.494 [2024-04-26 14:21:16.026668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.494 [2024-04-26 14:21:16.026686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.494 [2024-04-26 14:21:16.026701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.494 [2024-04-26 14:21:16.026718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.494 [2024-04-26 14:21:16.026733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.494 [2024-04-26 14:21:16.026750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.494 [2024-04-26 14:21:16.026765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.755 [2024-04-26 14:21:16.026787] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.755 [2024-04-26 14:21:16.026802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.755 [2024-04-26 14:21:16.026819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.755 [2024-04-26 14:21:16.026835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.755 [2024-04-26 14:21:16.026852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.755 [2024-04-26 14:21:16.026867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.755 [2024-04-26 14:21:16.026884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.755 [2024-04-26 14:21:16.026900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.755 [2024-04-26 14:21:16.026917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.755 [2024-04-26 14:21:16.026932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.755 [2024-04-26 14:21:16.026949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.755 [2024-04-26 14:21:16.026964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.755 [2024-04-26 14:21:16.026981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.755 [2024-04-26 14:21:16.026996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.755 [2024-04-26 14:21:16.027013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.755 [2024-04-26 14:21:16.027028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.755 [2024-04-26 14:21:16.027045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.755 [2024-04-26 14:21:16.027060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.755 [2024-04-26 14:21:16.027077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.755 [2024-04-26 14:21:16.027092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.755 [2024-04-26 14:21:16.027110] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.755 [2024-04-26 14:21:16.027124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.755 [2024-04-26 14:21:16.027142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.755 [2024-04-26 14:21:16.027158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.027194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.027227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.027258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.027291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.027322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.027354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.027386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.027418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027435] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.027450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.027481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.027513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.027545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.027577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.027613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.027656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.027689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.027721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.027753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027770] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.027785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.027817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.027849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.027881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.027913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.027944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.027976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.027993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.028012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.028029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.028044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.028061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.028076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.028092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.028107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.028124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.028140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.028157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.028172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.028189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.028204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.028221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.028236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.028253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.756 [2024-04-26 14:21:16.028268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.756 [2024-04-26 14:21:16.028284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492b20 is same with the state(5) to be set 00:16:34.756 [2024-04-26 14:21:16.030102] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:34.756 [2024-04-26 14:21:16.030149] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:16:34.756 [2024-04-26 14:21:16.030169] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:16:34.757 [2024-04-26 14:21:16.030190] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:16:34.757 [2024-04-26 14:21:16.030266] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2660c70 (9): Bad file descriptor 00:16:34.757 [2024-04-26 14:21:16.030357] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:34.757 [2024-04-26 14:21:16.030394] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:16:34.757 task offset: 19968 on job bdev=Nvme9n1 fails
00:16:34.757
00:16:34.757 Latency(us)
00:16:34.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:34.757 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:34.757 Job: Nvme1n1 ended in about 0.98 seconds with error
00:16:34.757 Verification LBA range: start 0x0 length 0x400
00:16:34.757 Nvme1n1 : 0.98 131.07 8.19 65.54 0.00 321411.35 23495.87 302921.96
00:16:34.757 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:34.757 Job: Nvme2n1 ended in about 0.98 seconds with error
00:16:34.757 Verification LBA range: start 0x0 length 0x400
00:16:34.757 Nvme2n1 : 0.98 130.59 8.16 65.29 0.00 314955.85 22427.88 301368.51
00:16:34.757 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:34.757 Job: Nvme3n1 ended in about 0.98 seconds with error
00:16:34.757 Verification LBA range: start 0x0 length 0x400
00:16:34.757 Nvme3n1 : 0.98 134.17 8.39 65.05 0.00 302275.51 22719.15 281173.71
00:16:34.757 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:34.757 Job: Nvme4n1 ended in about 0.99 seconds with error
00:16:34.757 Verification LBA range: start 0x0 length 0x400
00:16:34.757 Nvme4n1 : 0.99 129.63 8.10 64.82 0.00 302200.10 21845.33 301368.51
00:16:34.757 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:34.757 Job: Nvme5n1 ended in about 0.97 seconds with error
00:16:34.757 Verification LBA range: start 0x0 length 0x400
00:16:34.757 Nvme5n1 : 0.97 148.10 9.26 65.82 0.00 267292.12 17670.45 306028.85
00:16:34.757 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:34.757 Job: Nvme6n1 ended in about 0.96 seconds with error
00:16:34.757 Verification LBA range: start 0x0 length 0x400
00:16:34.757 Nvme6n1 : 0.96 132.90 8.31 66.45 0.00 278960.36 20583.16 284280.60
00:16:34.757 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:34.757 Job: Nvme7n1 ended in about 0.96 seconds with error
00:16:34.757 Verification LBA range: start 0x0 length 0x400
00:16:34.757 Nvme7n1 : 0.96 132.73 8.30 66.36 0.00 271788.31 21456.97 302921.96
00:16:34.757 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:34.757 Job: Nvme8n1 ended in about 0.99 seconds with error
00:16:34.757 Verification LBA range: start 0x0 length 0x400
00:16:34.757 Nvme8n1 : 0.99 129.12 8.07 64.56 0.00 273346.05 20777.34 279620.27
00:16:34.757 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:34.757 Job: Nvme9n1 ended in about 0.96 seconds with error
00:16:34.757 Verification LBA range: start 0x0 length 0x400
00:16:34.757 Nvme9n1 : 0.96 133.38 8.34 66.69 0.00 255526.68 16505.36 301368.51
00:16:34.757 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:34.757 Job: Nvme10n1 ended in about 0.96 seconds with error
00:16:34.757 Verification LBA range: start 0x0 length 0x400
00:16:34.757 Nvme10n1 : 0.96 133.21 8.33 66.60 0.00 248665.63 21165.70 333990.87
00:16:34.757 ===================================================================================================================
00:16:34.757 Total : 1334.90 83.43 657.19 0.00 283545.77 16505.36 333990.87
00:16:34.757 [2024-04-26 14:21:16.057880] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:34.757 [2024-04-26 14:21:16.057963] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:16:34.757 [2024-04-26 14:21:16.058285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:34.757 [2024-04-26 14:21:16.058416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:34.757 [2024-04-26 14:21:16.058443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2062c90 with addr=10.0.0.2, port=4420
00:16:34.757 [2024-04-26 14:21:16.058463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2062c90 is same with the state(5) to be set
[... the same pair of connect() failed (errno = 111) errors followed by sock connection / recv state errors repeated for tqpair=0x2496220, 0x249ade0 and 0x24badf0 (addr=10.0.0.2, port=4420) omitted ...]
00:16:34.757 [2024-04-26 14:21:16.059328] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:16:34.757 [2024-04-26 14:21:16.059342] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:16:34.757 [2024-04-26 14:21:16.059359] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:16:34.757 [2024-04-26 14:21:16.061017] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:16:34.757 [2024-04-26 14:21:16.061066] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:16:34.757 [2024-04-26 14:21:16.061085] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:16:34.757 [2024-04-26 14:21:16.061102] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:16:34.757 [2024-04-26 14:21:16.061122] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:34.757 [2024-04-26 14:21:16.061360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:34.757 [2024-04-26 14:21:16.061468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:34.757 [2024-04-26 14:21:16.061495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x255e960 with addr=10.0.0.2, port=4420
00:16:34.757 [2024-04-26 14:21:16.061514] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x255e960 is same with the state(5) to be set
00:16:34.757 [2024-04-26 14:21:16.061540] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2062c90 (9): Bad file descriptor
[... the same "Failed to flush tqpair=... (9): Bad file descriptor" error repeated for tqpair=0x2496220, 0x249ade0 and 0x24badf0, followed by four "Unable to perform failover, already in progress." notices, omitted ...]
[... the same connect() failed (errno = 111) / sock connection error / recv state sequence repeated for tqpair=0x24c7130, 0x255e7a0, 0x2607110 and 0x255d470 (addr=10.0.0.2, port=4420), then "Failed to flush tqpair=0x255e960 (9): Bad file descriptor", omitted ...]
00:16:34.758 [2024-04-26 14:21:16.063054] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:16:34.758 [2024-04-26 14:21:16.063068] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:16:34.758 [2024-04-26 14:21:16.063084] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[... the same "Ctrlr is in error state" / "controller reinitialization failed" / "in failed state." error triplet repeated for cnode2, cnode3 and cnode4 omitted ...]
00:16:34.758 [2024-04-26 14:21:16.063327] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:16:34.758 [2024-04-26 14:21:16.063355] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... three more "Resetting controller failed." errors, "Failed to flush tqpair=... (9): Bad file descriptor" errors for tqpair=0x24c7130, 0x255e7a0, 0x2607110 and 0x255d470, the cnode8 error triplet, and one more "Resetting controller failed." error omitted ...]
00:16:34.758 [2024-04-26 14:21:16.063701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:34.758 [2024-04-26 14:21:16.063817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:16:34.758 [2024-04-26 14:21:16.063843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2660c70 with addr=10.0.0.2, port=4420
00:16:34.758 [2024-04-26 14:21:16.063859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2660c70 is same with the state(5) to be set
[... the error triplets for cnode7, cnode6, cnode10 and cnode9, four more "Resetting controller failed." errors, "Failed to flush tqpair=0x2660c70 (9): Bad file descriptor", and the cnode5 error triplet omitted ...]
00:16:34.758 [2024-04-26 14:21:16.064266] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:16:35.019 14:21:16 -- target/shutdown.sh@136 -- # nvmfpid=
00:16:35.019 14:21:16 -- target/shutdown.sh@139 -- # sleep 1
00:16:36.021 14:21:17 -- target/shutdown.sh@142 -- # kill -9 3171356
00:16:36.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3171356) - No such process
00:16:36.021 14:21:17 -- target/shutdown.sh@142 -- # true
00:16:36.021 14:21:17 -- target/shutdown.sh@144 -- # stoptarget
00:16:36.021 14:21:17 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:16:36.021 14:21:17 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:16:36.021 14:21:17 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:16:36.021 14:21:17 -- target/shutdown.sh@45 -- # nvmftestfini
00:16:36.021 14:21:17 -- nvmf/common.sh@477 -- # nvmfcleanup
00:16:36.021 14:21:17 -- nvmf/common.sh@117 -- # sync
00:16:36.021 14:21:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:36.021 14:21:17 -- nvmf/common.sh@120 -- # set +e
00:16:36.021 14:21:17 -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:36.021 14:21:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:16:36.022 14:21:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:36.022 14:21:17 -- nvmf/common.sh@124 -- # set -e
00:16:36.022 14:21:17 -- nvmf/common.sh@125 -- # return 0
00:16:36.022 14:21:17 -- nvmf/common.sh@478 -- # '[' -n '' ']'
00:16:36.022 14:21:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:16:36.022 14:21:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:16:36.022 14:21:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:16:36.022 14:21:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:36.022 14:21:17 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:16:36.022 14:21:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:36.022 14:21:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:36.022 14:21:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:37.952 14:21:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:16:37.952
00:16:37.952 real 0m7.385s
00:16:37.952 user 0m18.028s
00:16:37.952 sys 0m1.376s
00:16:37.952 14:21:19 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:16:37.952 14:21:19 -- common/autotest_common.sh@10 -- # set +x
00:16:37.952 ************************************
00:16:37.952 END TEST nvmf_shutdown_tc3
00:16:37.952 ************************************
00:16:38.211 14:21:19 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:16:38.211
00:16:38.211 real 0m27.067s
00:16:38.211 user 1m16.418s
00:16:38.211 sys 0m5.938s
00:16:38.211 14:21:19 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:16:38.211 14:21:19 -- common/autotest_common.sh@10 -- # set +x
00:16:38.211 ************************************
00:16:38.211 END TEST nvmf_shutdown
00:16:38.211 ************************************
00:16:38.211 14:21:19 -- nvmf/nvmf.sh@84 -- # timing_exit target
00:16:38.211 14:21:19 -- common/autotest_common.sh@716 -- # xtrace_disable
00:16:38.211 14:21:19 -- common/autotest_common.sh@10 -- # set +x
00:16:38.211 14:21:19 -- nvmf/nvmf.sh@86 -- # timing_enter host
00:16:38.211 14:21:19 -- common/autotest_common.sh@710 -- # xtrace_disable
00:16:38.211 14:21:19 -- common/autotest_common.sh@10 -- # set +x
00:16:38.211 14:21:19 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]]
00:16:38.211 14:21:19 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:16:38.211 14:21:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:16:38.211 14:21:19 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:38.211 14:21:19 -- common/autotest_common.sh@10 -- # set +x
00:16:38.211 ************************************
00:16:38.211 START TEST nvmf_multicontroller
00:16:38.211 ************************************
00:16:38.211 14:21:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:16:38.211 * Looking for test storage...
00:16:38.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:16:38.211 14:21:19 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:16:38.211 14:21:19 -- nvmf/common.sh@7 -- # uname -s
00:16:38.211 14:21:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:38.211 14:21:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:38.211 14:21:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:38.211 14:21:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:38.211 14:21:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:38.211 14:21:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:38.211 14:21:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:38.211 14:21:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:38.211 14:21:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:38.211 14:21:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:38.211 14:21:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:16:38.211 14:21:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc
00:16:38.211 14:21:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:38.211 14:21:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:38.211 14:21:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:38.211 14:21:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:38.211 14:21:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:16:38.211 14:21:19 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:38.211 14:21:19 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:38.211 14:21:19 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:38.211 14:21:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same three toolchain directories repeated four more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:38.211 14:21:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[... same three toolchain directories repeated four more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:38.211 14:21:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... same three toolchain directories repeated four more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:38.211 14:21:19 -- paths/export.sh@5 -- # export PATH
00:16:38.211 14:21:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... same three toolchain directories repeated four more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:38.211 14:21:19 -- nvmf/common.sh@47 -- # : 0
00:16:38.211 14:21:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:16:38.211 14:21:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:16:38.211 14:21:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:38.211 14:21:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:38.211 14:21:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:38.211 14:21:19 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:16:38.211 14:21:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:16:38.211 14:21:19 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:16:38.211 14:21:19 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:16:38.211 14:21:19 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:16:38.211 14:21:19 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:16:38.211 14:21:19 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:16:38.211 14:21:19 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:16:38.211 14:21:19 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:16:38.211 14:21:19 -- host/multicontroller.sh@23 -- # nvmftestinit
00:16:38.212 14:21:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:16:38.212 14:21:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:38.212 14:21:19 -- nvmf/common.sh@437 -- # prepare_net_devs
00:16:38.212 14:21:19 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:16:38.212 14:21:19 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:16:38.212 14:21:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:38.212 14:21:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:38.212 14:21:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:38.212 14:21:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:38.212 14:21:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:38.212 14:21:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:38.212 14:21:19 -- common/autotest_common.sh@10 -- # set +x 00:16:40.118 14:21:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:40.118 14:21:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:40.118 14:21:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:40.119 14:21:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:40.119 14:21:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:40.119 14:21:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:40.119 14:21:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:40.119 14:21:21 -- nvmf/common.sh@295 -- # net_devs=() 00:16:40.119 14:21:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:40.119 14:21:21 -- nvmf/common.sh@296 -- # e810=() 00:16:40.119 14:21:21 -- nvmf/common.sh@296 -- # local -ga e810 00:16:40.119 14:21:21 -- nvmf/common.sh@297 -- # x722=() 00:16:40.119 14:21:21 -- nvmf/common.sh@297 -- # local -ga x722 00:16:40.119 14:21:21 -- nvmf/common.sh@298 -- # mlx=() 00:16:40.119 14:21:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:40.119 14:21:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:40.119 14:21:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:40.119 14:21:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:40.119 14:21:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:40.119 14:21:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:40.119 14:21:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:40.119 14:21:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:40.119 14:21:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:40.119 14:21:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:40.119 14:21:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:40.119 14:21:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:40.119 14:21:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:40.119 14:21:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:40.119 14:21:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:40.119 14:21:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:40.119 14:21:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:40.119 14:21:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:40.119 14:21:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.119 14:21:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:16:40.119 Found 0000:08:00.0 (0x8086 - 0x159b) 00:16:40.119 14:21:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.119 14:21:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.119 14:21:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.119 14:21:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.119 14:21:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.119 14:21:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.119 14:21:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:16:40.119 Found 0000:08:00.1 (0x8086 - 0x159b) 00:16:40.119 14:21:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:16:40.119 14:21:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.119 14:21:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.119 14:21:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.119 14:21:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.119 14:21:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:40.119 14:21:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:40.119 14:21:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:40.119 14:21:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.119 14:21:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.119 14:21:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:40.119 14:21:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.119 14:21:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:16:40.119 Found net devices under 0000:08:00.0: cvl_0_0 00:16:40.119 14:21:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.119 14:21:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.119 14:21:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.119 14:21:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:40.119 14:21:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.119 14:21:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:16:40.119 Found net devices under 0000:08:00.1: cvl_0_1 00:16:40.119 14:21:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.119 14:21:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:40.119 14:21:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:40.119 14:21:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:40.119 14:21:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:40.119 14:21:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:40.119 14:21:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.119 14:21:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:40.119 14:21:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:40.119 14:21:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:40.119 14:21:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:40.119 14:21:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:40.119 14:21:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:40.119 14:21:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:40.119 14:21:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.119 14:21:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:40.119 14:21:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:40.119 14:21:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:40.119 14:21:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:40.119 14:21:21 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:40.119 14:21:21 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:40.119 14:21:21 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:40.119 14:21:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:40.119 14:21:21 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:40.119 14:21:21 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:16:40.119 14:21:21 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:40.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:16:40.119 00:16:40.119 --- 10.0.0.2 ping statistics --- 00:16:40.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.119 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:16:40.119 14:21:21 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:40.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:40.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:16:40.119 00:16:40.119 --- 10.0.0.1 ping statistics --- 00:16:40.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.119 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:16:40.119 14:21:21 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.119 14:21:21 -- nvmf/common.sh@411 -- # return 0 00:16:40.119 14:21:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:40.119 14:21:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.119 14:21:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:40.119 14:21:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:40.119 14:21:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.119 14:21:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:40.119 14:21:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:40.119 14:21:21 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:16:40.119 14:21:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:40.119 14:21:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:40.119 14:21:21 -- common/autotest_common.sh@10 -- # set +x 00:16:40.119 14:21:21 -- nvmf/common.sh@470 -- # nvmfpid=3173237 00:16:40.119 14:21:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:40.119 14:21:21 -- nvmf/common.sh@471 -- # waitforlisten 3173237 00:16:40.119 14:21:21 -- common/autotest_common.sh@817 -- # '[' -z 3173237 ']' 00:16:40.119 14:21:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.119 14:21:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:40.119 14:21:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.119 14:21:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:40.119 14:21:21 -- common/autotest_common.sh@10 -- # set +x 00:16:40.119 [2024-04-26 14:21:21.546408] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:16:40.119 [2024-04-26 14:21:21.546496] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.119 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.119 [2024-04-26 14:21:21.610878] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:40.378 [2024-04-26 14:21:21.726422] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
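The block above is nvmftestinit at work: gather_supported_nvmf_pci_devs matches both E810 ports (Intel device 0x159b, driver ice) and finds their net devices cvl_0_0 and cvl_0_1, then nvmf_tcp_init splits them across a network namespace so target and initiator traffic crosses a real link while both live on one host. Condensed from the commands traced above:

# Isolate the target port in its own namespace; the initiator stays in the
# root namespace, and traffic to the NVMe/TCP port is explicitly allowed.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

Both pings answering in well under a millisecond confirms the topology before nvmf_tgt is started inside the namespace; note the NVMF_APP prefix 'ip netns exec cvl_0_0_ns_spdk' on the target invocation that follows.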
00:16:40.378 [2024-04-26 14:21:21.726479] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.378 [2024-04-26 14:21:21.726494] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.378 [2024-04-26 14:21:21.726508] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.378 [2024-04-26 14:21:21.726520] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.378 [2024-04-26 14:21:21.726622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.378 [2024-04-26 14:21:21.726667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.378 [2024-04-26 14:21:21.726672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.378 14:21:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:40.378 14:21:21 -- common/autotest_common.sh@850 -- # return 0 00:16:40.378 14:21:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:40.378 14:21:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:40.378 14:21:21 -- common/autotest_common.sh@10 -- # set +x 00:16:40.378 14:21:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.378 14:21:21 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:40.378 14:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.378 14:21:21 -- common/autotest_common.sh@10 -- # set +x 00:16:40.378 [2024-04-26 14:21:21.868176] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.378 14:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.378 14:21:21 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:40.378 14:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.378 14:21:21 -- common/autotest_common.sh@10 -- # set +x 00:16:40.378 Malloc0 00:16:40.378 14:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.378 14:21:21 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:40.378 14:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.378 14:21:21 -- common/autotest_common.sh@10 -- # set +x 00:16:40.378 14:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.378 14:21:21 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:40.378 14:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.378 14:21:21 -- common/autotest_common.sh@10 -- # set +x 00:16:40.378 14:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.378 14:21:21 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:40.378 14:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.378 14:21:21 -- common/autotest_common.sh@10 -- # set +x 00:16:40.378 [2024-04-26 14:21:21.930441] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.378 14:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.378 14:21:21 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:40.378 14:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.378 14:21:21 
-- common/autotest_common.sh@10 -- # set +x 00:16:40.378 [2024-04-26 14:21:21.938365] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:40.378 14:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.378 14:21:21 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:40.378 14:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.378 14:21:21 -- common/autotest_common.sh@10 -- # set +x 00:16:40.637 Malloc1 00:16:40.637 14:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.637 14:21:21 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:40.637 14:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.637 14:21:21 -- common/autotest_common.sh@10 -- # set +x 00:16:40.637 14:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.637 14:21:21 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:16:40.637 14:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.637 14:21:21 -- common/autotest_common.sh@10 -- # set +x 00:16:40.637 14:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.637 14:21:21 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:40.637 14:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.637 14:21:21 -- common/autotest_common.sh@10 -- # set +x 00:16:40.637 14:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.637 14:21:21 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:16:40.637 14:21:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.637 14:21:21 -- common/autotest_common.sh@10 -- # set +x 00:16:40.637 14:21:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.637 14:21:21 -- host/multicontroller.sh@44 -- # bdevperf_pid=3173356 00:16:40.637 14:21:21 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:16:40.637 14:21:21 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:40.637 14:21:21 -- host/multicontroller.sh@47 -- # waitforlisten 3173356 /var/tmp/bdevperf.sock 00:16:40.637 14:21:21 -- common/autotest_common.sh@817 -- # '[' -z 3173356 ']' 00:16:40.637 14:21:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:40.637 14:21:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:40.637 14:21:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:40.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
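At this point multicontroller.sh has provisioned the target: one nvmf TCP transport, two 64 MiB malloc bdevs, and two subsystems (cnode1 and cnode2) that each expose their bdev on both listeners, 10.0.0.2:4420 and 10.0.0.2:4421, before launching bdevperf with its own RPC socket (-r /var/tmp/bdevperf.sock). The same sequence expressed as direct scripts/rpc.py calls, as a sketch only, since the script actually drives these through its rpc_cmd wrapper:

# Provisioning traced above, as plain rpc.py calls (rpc.py path assumed).
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# cnode2 repeats the same pattern with Malloc1 on the same two ports.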
00:16:40.637 14:21:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:40.637 14:21:21 -- common/autotest_common.sh@10 -- # set +x 00:16:40.895 14:21:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:40.895 14:21:22 -- common/autotest_common.sh@850 -- # return 0 00:16:40.895 14:21:22 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:40.895 14:21:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.895 14:21:22 -- common/autotest_common.sh@10 -- # set +x 00:16:41.154 NVMe0n1 00:16:41.154 14:21:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.154 14:21:22 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:41.154 14:21:22 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:16:41.154 14:21:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.154 14:21:22 -- common/autotest_common.sh@10 -- # set +x 00:16:41.154 14:21:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.154 1 00:16:41.154 14:21:22 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:41.154 14:21:22 -- common/autotest_common.sh@638 -- # local es=0 00:16:41.154 14:21:22 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:41.154 14:21:22 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:41.154 14:21:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:41.154 14:21:22 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:41.154 14:21:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:41.154 14:21:22 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:41.154 14:21:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.154 14:21:22 -- common/autotest_common.sh@10 -- # set +x 00:16:41.154 request: 00:16:41.154 { 00:16:41.154 "name": "NVMe0", 00:16:41.154 "trtype": "tcp", 00:16:41.154 "traddr": "10.0.0.2", 00:16:41.154 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:16:41.154 "hostaddr": "10.0.0.2", 00:16:41.154 "hostsvcid": "60000", 00:16:41.154 "adrfam": "ipv4", 00:16:41.154 "trsvcid": "4420", 00:16:41.154 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:41.154 "method": "bdev_nvme_attach_controller", 00:16:41.154 "req_id": 1 00:16:41.154 } 00:16:41.154 Got JSON-RPC error response 00:16:41.154 response: 00:16:41.154 { 00:16:41.154 "code": -114, 00:16:41.154 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:16:41.154 } 00:16:41.154 14:21:22 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:41.154 14:21:22 -- common/autotest_common.sh@641 -- # es=1 00:16:41.154 14:21:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:41.154 14:21:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:41.154 14:21:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:41.154 14:21:22 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:41.154 14:21:22 -- common/autotest_common.sh@638 -- # local es=0 00:16:41.154 14:21:22 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:41.154 14:21:22 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:41.154 14:21:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:41.154 14:21:22 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:41.154 14:21:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:41.154 14:21:22 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:41.154 14:21:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.154 14:21:22 -- common/autotest_common.sh@10 -- # set +x 00:16:41.154 request: 00:16:41.154 { 00:16:41.154 "name": "NVMe0", 00:16:41.154 "trtype": "tcp", 00:16:41.154 "traddr": "10.0.0.2", 00:16:41.154 "hostaddr": "10.0.0.2", 00:16:41.154 "hostsvcid": "60000", 00:16:41.154 "adrfam": "ipv4", 00:16:41.154 "trsvcid": "4420", 00:16:41.154 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:41.154 "method": "bdev_nvme_attach_controller", 00:16:41.154 "req_id": 1 00:16:41.154 } 00:16:41.154 Got JSON-RPC error response 00:16:41.154 response: 00:16:41.154 { 00:16:41.154 "code": -114, 00:16:41.154 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:16:41.154 } 00:16:41.154 14:21:22 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:41.154 14:21:22 -- common/autotest_common.sh@641 -- # es=1 00:16:41.154 14:21:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:41.154 14:21:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:41.154 14:21:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:41.154 14:21:22 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:41.154 14:21:22 -- common/autotest_common.sh@638 -- # local es=0 00:16:41.154 14:21:22 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:41.154 14:21:22 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:41.154 14:21:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:41.154 14:21:22 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:41.154 14:21:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:41.154 14:21:22 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:41.154 14:21:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.154 14:21:22 -- common/autotest_common.sh@10 -- # set +x 00:16:41.154 request: 00:16:41.154 { 00:16:41.154 "name": "NVMe0", 00:16:41.154 "trtype": "tcp", 00:16:41.154 "traddr": "10.0.0.2", 00:16:41.154 "hostaddr": 
"10.0.0.2", 00:16:41.154 "hostsvcid": "60000", 00:16:41.154 "adrfam": "ipv4", 00:16:41.154 "trsvcid": "4420", 00:16:41.154 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:41.154 "multipath": "disable", 00:16:41.154 "method": "bdev_nvme_attach_controller", 00:16:41.154 "req_id": 1 00:16:41.154 } 00:16:41.154 Got JSON-RPC error response 00:16:41.154 response: 00:16:41.154 { 00:16:41.154 "code": -114, 00:16:41.154 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:16:41.154 } 00:16:41.154 14:21:22 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:41.154 14:21:22 -- common/autotest_common.sh@641 -- # es=1 00:16:41.154 14:21:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:41.154 14:21:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:41.154 14:21:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:41.154 14:21:22 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:41.154 14:21:22 -- common/autotest_common.sh@638 -- # local es=0 00:16:41.154 14:21:22 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:41.154 14:21:22 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:41.154 14:21:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:41.154 14:21:22 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:41.154 14:21:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:41.154 14:21:22 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:41.154 14:21:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.154 14:21:22 -- common/autotest_common.sh@10 -- # set +x 00:16:41.154 request: 00:16:41.154 { 00:16:41.154 "name": "NVMe0", 00:16:41.154 "trtype": "tcp", 00:16:41.154 "traddr": "10.0.0.2", 00:16:41.154 "hostaddr": "10.0.0.2", 00:16:41.154 "hostsvcid": "60000", 00:16:41.154 "adrfam": "ipv4", 00:16:41.154 "trsvcid": "4420", 00:16:41.154 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:41.154 "multipath": "failover", 00:16:41.154 "method": "bdev_nvme_attach_controller", 00:16:41.154 "req_id": 1 00:16:41.154 } 00:16:41.154 Got JSON-RPC error response 00:16:41.154 response: 00:16:41.154 { 00:16:41.154 "code": -114, 00:16:41.154 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:16:41.154 } 00:16:41.154 14:21:22 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:41.154 14:21:22 -- common/autotest_common.sh@641 -- # es=1 00:16:41.154 14:21:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:41.154 14:21:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:41.154 14:21:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:41.154 14:21:22 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:41.154 14:21:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.154 14:21:22 -- common/autotest_common.sh@10 -- # set +x 00:16:41.413 00:16:41.413 14:21:22 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:16:41.413 14:21:22 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:41.413 14:21:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.413 14:21:22 -- common/autotest_common.sh@10 -- # set +x 00:16:41.413 14:21:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.413 14:21:22 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:41.413 14:21:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.413 14:21:22 -- common/autotest_common.sh@10 -- # set +x 00:16:41.413 00:16:41.413 14:21:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.413 14:21:22 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:41.413 14:21:22 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:16:41.413 14:21:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.413 14:21:22 -- common/autotest_common.sh@10 -- # set +x 00:16:41.413 14:21:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.413 14:21:22 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:16:41.413 14:21:22 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:42.786 0 00:16:42.786 14:21:23 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:16:42.786 14:21:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.786 14:21:23 -- common/autotest_common.sh@10 -- # set +x 00:16:42.786 14:21:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.786 14:21:23 -- host/multicontroller.sh@100 -- # killprocess 3173356 00:16:42.786 14:21:23 -- common/autotest_common.sh@936 -- # '[' -z 3173356 ']' 00:16:42.786 14:21:23 -- common/autotest_common.sh@940 -- # kill -0 3173356 00:16:42.786 14:21:23 -- common/autotest_common.sh@941 -- # uname 00:16:42.786 14:21:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:42.786 14:21:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3173356 00:16:42.786 14:21:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:42.786 14:21:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:42.786 14:21:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3173356' 00:16:42.786 killing process with pid 3173356 00:16:42.786 14:21:24 -- common/autotest_common.sh@955 -- # kill 3173356 00:16:42.786 14:21:24 -- common/autotest_common.sh@960 -- # wait 3173356 00:16:42.786 14:21:24 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:42.786 14:21:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.786 14:21:24 -- common/autotest_common.sh@10 -- # set +x 00:16:42.786 14:21:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.786 14:21:24 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:42.786 14:21:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.786 14:21:24 -- common/autotest_common.sh@10 -- # set +x 00:16:42.786 14:21:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.786 14:21:24 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
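The sequence just traced is the heart of the multicontroller test: NVMe0 attaches once over port 4420, then four NOT-wrapped re-attach attempts under the same controller name are each required to fail with JSON-RPC error -114 — a different hostnqn, a different subsystem NQN, "multipath": "disable", and "multipath": "failover" on an identical path. Attaching NVMe0 on the second listener (port 4421) succeeds as an additional path, which is then detached; NVMe1 is attached on 4421, bdev_nvme_get_controllers confirms two controllers, and bdevperf.py perform_tests drives the one-second write workload. One of the negative checks, sketched as a shell one-liner under the assumption that rpc.py is on PATH (the script itself uses NOT rpc_cmd):

# Re-attaching under the same controller name with a different hostnqn
# must be rejected; the RPC returns error code -114.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 \
    && echo "unexpected success" || echo "rejected as expected (-114)"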
00:16:42.786 14:21:24 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:16:42.786 14:21:24 -- common/autotest_common.sh@1598 -- # read -r file 00:16:42.786 14:21:24 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:16:42.786 14:21:24 -- common/autotest_common.sh@1597 -- # sort -u 00:16:42.786 14:21:24 -- common/autotest_common.sh@1599 -- # cat 00:16:42.786 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:16:42.786 [2024-04-26 14:21:22.039749] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:16:42.786 [2024-04-26 14:21:22.039863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3173356 ] 00:16:42.786 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.786 [2024-04-26 14:21:22.100170] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.786 [2024-04-26 14:21:22.215000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.786 [2024-04-26 14:21:22.805569] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name c3122f5a-cd49-41d6-84b3-e37232e966d4 already exists 00:16:42.786 [2024-04-26 14:21:22.805610] bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:c3122f5a-cd49-41d6-84b3-e37232e966d4 alias for bdev NVMe1n1 00:16:42.786 [2024-04-26 14:21:22.805637] bdev_nvme.c:4272:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:16:42.786 Running I/O for 1 seconds... 00:16:42.786 00:16:42.786 Latency(us) 00:16:42.786 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.786 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:16:42.786 NVMe0n1 : 1.01 16391.83 64.03 0.00 0.00 7795.42 6699.24 15049.01 00:16:42.786 =================================================================================================================== 00:16:42.786 Total : 16391.83 64.03 0.00 0.00 7795.42 6699.24 15049.01 00:16:42.786 Received shutdown signal, test time was about 1.000000 seconds 00:16:42.786 00:16:42.786 Latency(us) 00:16:42.786 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.786 =================================================================================================================== 00:16:42.786 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:42.786 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:16:42.786 14:21:24 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:16:42.786 14:21:24 -- common/autotest_common.sh@1598 -- # read -r file 00:16:42.786 14:21:24 -- host/multicontroller.sh@108 -- # nvmftestfini 00:16:42.786 14:21:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:42.786 14:21:24 -- nvmf/common.sh@117 -- # sync 00:16:42.786 14:21:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:42.786 14:21:24 -- nvmf/common.sh@120 -- # set +e 00:16:42.786 14:21:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:42.786 14:21:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:42.786 rmmod nvme_tcp 00:16:42.786 rmmod nvme_fabrics 00:16:42.786 rmmod nvme_keyring 00:16:42.787 14:21:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:42.787 14:21:24 -- nvmf/common.sh@124 -- # set -e 
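The try.txt dump above records the interesting failure path inside bdevperf: NVMe0 and NVMe1 reach the same namespace behind cnode1, so the second registration collides on bdev UUID c3122f5a-cd49-41d6-84b3-e37232e966d4 and nvme_bdev_create logs spdk_bdev_register() failed. The controller attach itself still stands (get_controllers counted two), and the workload runs only against NVMe0n1. The one-second run posts roughly 16.4k write IOPS (about 64 MiB/s) at 4 KiB and queue depth 128, with an average latency near 7.8 ms — which is consistent, since 128 outstanding I/Os divided by 7.8 ms per I/O works out to about 16.4k IOPS. Teardown then deletes both subsystems, prints try.txt via pap, removes it, and nvmftestfini unloads the nvme-tcp, nvme-fabrics and nvme-keyring modules before killing the target.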
00:16:42.787 14:21:24 -- nvmf/common.sh@125 -- # return 0 00:16:42.787 14:21:24 -- nvmf/common.sh@478 -- # '[' -n 3173237 ']' 00:16:42.787 14:21:24 -- nvmf/common.sh@479 -- # killprocess 3173237 00:16:42.787 14:21:24 -- common/autotest_common.sh@936 -- # '[' -z 3173237 ']' 00:16:42.787 14:21:24 -- common/autotest_common.sh@940 -- # kill -0 3173237 00:16:42.787 14:21:24 -- common/autotest_common.sh@941 -- # uname 00:16:42.787 14:21:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:42.787 14:21:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3173237 00:16:42.787 14:21:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:42.787 14:21:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:42.787 14:21:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3173237' 00:16:42.787 killing process with pid 3173237 00:16:42.787 14:21:24 -- common/autotest_common.sh@955 -- # kill 3173237 00:16:42.787 14:21:24 -- common/autotest_common.sh@960 -- # wait 3173237 00:16:43.045 14:21:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:43.045 14:21:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:43.045 14:21:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:43.045 14:21:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:43.045 14:21:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:43.045 14:21:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.045 14:21:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.045 14:21:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.585 14:21:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:45.585 00:16:45.585 real 0m6.938s 00:16:45.585 user 0m11.382s 00:16:45.585 sys 0m1.963s 00:16:45.585 14:21:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:45.585 14:21:26 -- common/autotest_common.sh@10 -- # set +x 00:16:45.585 ************************************ 00:16:45.585 END TEST nvmf_multicontroller 00:16:45.585 ************************************ 00:16:45.585 14:21:26 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:45.585 14:21:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:45.585 14:21:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:45.585 14:21:26 -- common/autotest_common.sh@10 -- # set +x 00:16:45.585 ************************************ 00:16:45.585 START TEST nvmf_aer 00:16:45.585 ************************************ 00:16:45.585 14:21:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:45.585 * Looking for test storage... 
00:16:45.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:16:45.585 14:21:26 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:45.585 14:21:26 -- nvmf/common.sh@7 -- # uname -s 00:16:45.585 14:21:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.585 14:21:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.585 14:21:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.585 14:21:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.585 14:21:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.585 14:21:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.585 14:21:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.585 14:21:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.585 14:21:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.585 14:21:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.585 14:21:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:45.585 14:21:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:16:45.585 14:21:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.585 14:21:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.585 14:21:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:45.585 14:21:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.585 14:21:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:45.585 14:21:26 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.585 14:21:26 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.585 14:21:26 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.585 14:21:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.585 14:21:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.585 14:21:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.585 14:21:26 -- paths/export.sh@5 -- # export PATH 00:16:45.585 14:21:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.585 14:21:26 -- nvmf/common.sh@47 -- # : 0 00:16:45.585 14:21:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:45.585 14:21:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:45.585 14:21:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.585 14:21:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.585 14:21:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.585 14:21:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:45.585 14:21:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:45.585 14:21:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:45.585 14:21:26 -- host/aer.sh@11 -- # nvmftestinit 00:16:45.585 14:21:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:45.585 14:21:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.585 14:21:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:45.585 14:21:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:45.585 14:21:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:45.585 14:21:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.585 14:21:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.585 14:21:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.585 14:21:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:45.585 14:21:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:45.585 14:21:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:45.585 14:21:26 -- common/autotest_common.sh@10 -- # set +x 00:16:46.958 14:21:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:46.958 14:21:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:46.958 14:21:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:46.958 14:21:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:46.958 14:21:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:46.958 14:21:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:46.958 14:21:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:46.958 14:21:28 -- nvmf/common.sh@295 -- # net_devs=() 00:16:46.958 14:21:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:46.958 14:21:28 -- nvmf/common.sh@296 -- # e810=() 00:16:46.958 14:21:28 -- nvmf/common.sh@296 -- # local -ga e810 00:16:46.958 14:21:28 -- nvmf/common.sh@297 -- # x722=() 00:16:46.958 
14:21:28 -- nvmf/common.sh@297 -- # local -ga x722 00:16:46.958 14:21:28 -- nvmf/common.sh@298 -- # mlx=() 00:16:46.958 14:21:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:46.959 14:21:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:46.959 14:21:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:46.959 14:21:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:46.959 14:21:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:46.959 14:21:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:46.959 14:21:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:46.959 14:21:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:46.959 14:21:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:46.959 14:21:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:46.959 14:21:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:46.959 14:21:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:46.959 14:21:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:46.959 14:21:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:46.959 14:21:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:46.959 14:21:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:46.959 14:21:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:46.959 14:21:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:46.959 14:21:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:46.959 14:21:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:16:46.959 Found 0000:08:00.0 (0x8086 - 0x159b) 00:16:46.959 14:21:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:46.959 14:21:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:46.959 14:21:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.959 14:21:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.959 14:21:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:46.959 14:21:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:46.959 14:21:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:16:46.959 Found 0000:08:00.1 (0x8086 - 0x159b) 00:16:46.959 14:21:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:46.959 14:21:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:46.959 14:21:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.959 14:21:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.959 14:21:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:46.959 14:21:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:46.959 14:21:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:46.959 14:21:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:46.959 14:21:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:46.959 14:21:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.959 14:21:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:46.959 14:21:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.959 14:21:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:16:46.959 Found net devices under 0000:08:00.0: cvl_0_0 00:16:46.959 14:21:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.959 14:21:28 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:46.959 14:21:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.959 14:21:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:46.959 14:21:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.959 14:21:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:16:46.959 Found net devices under 0000:08:00.1: cvl_0_1 00:16:46.959 14:21:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.959 14:21:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:46.959 14:21:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:46.959 14:21:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:46.959 14:21:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:46.959 14:21:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:46.959 14:21:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:46.959 14:21:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:46.959 14:21:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:46.959 14:21:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:46.959 14:21:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:46.959 14:21:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:46.959 14:21:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:46.959 14:21:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:46.959 14:21:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:46.959 14:21:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:46.959 14:21:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:46.959 14:21:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:46.959 14:21:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:47.216 14:21:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:47.216 14:21:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:47.216 14:21:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:47.216 14:21:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:47.216 14:21:28 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:47.216 14:21:28 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:47.216 14:21:28 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:47.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:47.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:16:47.216 00:16:47.216 --- 10.0.0.2 ping statistics --- 00:16:47.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.216 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:16:47.216 14:21:28 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:47.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:47.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:16:47.216 00:16:47.216 --- 10.0.0.1 ping statistics --- 00:16:47.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.216 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:16:47.216 14:21:28 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:47.216 14:21:28 -- nvmf/common.sh@411 -- # return 0 00:16:47.216 14:21:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:47.216 14:21:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:47.216 14:21:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:47.216 14:21:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:47.216 14:21:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:47.216 14:21:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:47.216 14:21:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:47.216 14:21:28 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:16:47.216 14:21:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:47.216 14:21:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:47.216 14:21:28 -- common/autotest_common.sh@10 -- # set +x 00:16:47.216 14:21:28 -- nvmf/common.sh@470 -- # nvmfpid=3175065 00:16:47.216 14:21:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:47.216 14:21:28 -- nvmf/common.sh@471 -- # waitforlisten 3175065 00:16:47.216 14:21:28 -- common/autotest_common.sh@817 -- # '[' -z 3175065 ']' 00:16:47.216 14:21:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.216 14:21:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:47.216 14:21:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.216 14:21:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:47.216 14:21:28 -- common/autotest_common.sh@10 -- # set +x 00:16:47.216 [2024-04-26 14:21:28.682886] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:16:47.216 [2024-04-26 14:21:28.682975] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.216 EAL: No free 2048 kB hugepages reported on node 1 00:16:47.216 [2024-04-26 14:21:28.748362] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:47.474 [2024-04-26 14:21:28.864352] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:47.474 [2024-04-26 14:21:28.864411] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:47.474 [2024-04-26 14:21:28.864436] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:47.474 [2024-04-26 14:21:28.864456] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:47.474 [2024-04-26 14:21:28.864477] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:47.474 [2024-04-26 14:21:28.864566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.474 [2024-04-26 14:21:28.864642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:47.474 [2024-04-26 14:21:28.864680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:47.474 [2024-04-26 14:21:28.864688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.474 14:21:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:47.474 14:21:28 -- common/autotest_common.sh@850 -- # return 0 00:16:47.474 14:21:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:47.474 14:21:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:47.474 14:21:28 -- common/autotest_common.sh@10 -- # set +x 00:16:47.474 14:21:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.474 14:21:29 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:47.474 14:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.474 14:21:29 -- common/autotest_common.sh@10 -- # set +x 00:16:47.474 [2024-04-26 14:21:29.009192] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.474 14:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.474 14:21:29 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:16:47.474 14:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.474 14:21:29 -- common/autotest_common.sh@10 -- # set +x 00:16:47.474 Malloc0 00:16:47.474 14:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.474 14:21:29 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:16:47.474 14:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.474 14:21:29 -- common/autotest_common.sh@10 -- # set +x 00:16:47.731 14:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.731 14:21:29 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:47.731 14:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.731 14:21:29 -- common/autotest_common.sh@10 -- # set +x 00:16:47.731 14:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.731 14:21:29 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:47.731 14:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.731 14:21:29 -- common/autotest_common.sh@10 -- # set +x 00:16:47.731 [2024-04-26 14:21:29.057907] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:47.731 14:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.731 14:21:29 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:16:47.731 14:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.731 14:21:29 -- common/autotest_common.sh@10 -- # set +x 00:16:47.731 [2024-04-26 14:21:29.065689] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:16:47.731 [ 00:16:47.731 { 00:16:47.731 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:47.731 "subtype": "Discovery", 00:16:47.731 "listen_addresses": [], 00:16:47.731 "allow_any_host": true, 00:16:47.731 "hosts": [] 00:16:47.731 }, 00:16:47.731 { 00:16:47.731 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:16:47.731 "subtype": "NVMe", 00:16:47.731 "listen_addresses": [ 00:16:47.731 { 00:16:47.731 "transport": "TCP", 00:16:47.731 "trtype": "TCP", 00:16:47.731 "adrfam": "IPv4", 00:16:47.731 "traddr": "10.0.0.2", 00:16:47.731 "trsvcid": "4420" 00:16:47.731 } 00:16:47.731 ], 00:16:47.731 "allow_any_host": true, 00:16:47.731 "hosts": [], 00:16:47.731 "serial_number": "SPDK00000000000001", 00:16:47.731 "model_number": "SPDK bdev Controller", 00:16:47.731 "max_namespaces": 2, 00:16:47.731 "min_cntlid": 1, 00:16:47.731 "max_cntlid": 65519, 00:16:47.731 "namespaces": [ 00:16:47.731 { 00:16:47.731 "nsid": 1, 00:16:47.731 "bdev_name": "Malloc0", 00:16:47.731 "name": "Malloc0", 00:16:47.731 "nguid": "CA94133ECA854A5D82EB631D97751120", 00:16:47.731 "uuid": "ca94133e-ca85-4a5d-82eb-631d97751120" 00:16:47.731 } 00:16:47.731 ] 00:16:47.731 } 00:16:47.731 ] 00:16:47.731 14:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.731 14:21:29 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:47.731 14:21:29 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:16:47.731 14:21:29 -- host/aer.sh@33 -- # aerpid=3175103 00:16:47.731 14:21:29 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:16:47.731 14:21:29 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:16:47.731 14:21:29 -- common/autotest_common.sh@1251 -- # local i=0 00:16:47.731 14:21:29 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:47.731 14:21:29 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:16:47.731 14:21:29 -- common/autotest_common.sh@1254 -- # i=1 00:16:47.731 14:21:29 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:16:47.731 EAL: No free 2048 kB hugepages reported on node 1 00:16:47.731 14:21:29 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:47.731 14:21:29 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:16:47.731 14:21:29 -- common/autotest_common.sh@1254 -- # i=2 00:16:47.731 14:21:29 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:16:47.732 14:21:29 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:47.732 14:21:29 -- common/autotest_common.sh@1253 -- # '[' 2 -lt 200 ']' 00:16:47.732 14:21:29 -- common/autotest_common.sh@1254 -- # i=3 00:16:47.732 14:21:29 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:16:47.989 14:21:29 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:47.989 14:21:29 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:47.989 14:21:29 -- common/autotest_common.sh@1262 -- # return 0 00:16:47.989 14:21:29 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:16:47.989 14:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.989 14:21:29 -- common/autotest_common.sh@10 -- # set +x 00:16:47.989 Malloc1 00:16:47.989 14:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.989 14:21:29 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:16:47.989 14:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.989 14:21:29 -- common/autotest_common.sh@10 -- # set +x 00:16:47.989 14:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.989 14:21:29 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:16:47.989 14:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.989 14:21:29 -- common/autotest_common.sh@10 -- # set +x 00:16:47.989 [ 00:16:47.989 { 00:16:47.989 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:47.989 "subtype": "Discovery", 00:16:47.989 "listen_addresses": [], 00:16:47.989 "allow_any_host": true, 00:16:47.989 "hosts": [] 00:16:47.989 }, 00:16:47.989 { 00:16:47.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:47.989 "subtype": "NVMe", 00:16:47.989 "listen_addresses": [ 00:16:47.989 { 00:16:47.989 "transport": "TCP", 00:16:47.989 "trtype": "TCP", 00:16:47.989 "adrfam": "IPv4", 00:16:47.989 "traddr": "10.0.0.2", 00:16:47.989 "trsvcid": "4420" 00:16:47.989 } 00:16:47.989 ], 00:16:47.989 "allow_any_host": true, 00:16:47.989 "hosts": [], 00:16:47.990 "serial_number": "SPDK00000000000001", 00:16:47.990 "model_number": "SPDK bdev Controller", 00:16:47.990 "max_namespaces": 2, 00:16:47.990 "min_cntlid": 1, 00:16:47.990 "max_cntlid": 65519, 00:16:47.990 "namespaces": [ 00:16:47.990 { 00:16:47.990 "nsid": 1, 00:16:47.990 "bdev_name": "Malloc0", 00:16:47.990 "name": "Malloc0", 00:16:47.990 "nguid": "CA94133ECA854A5D82EB631D97751120", 00:16:47.990 "uuid": "ca94133e-ca85-4a5d-82eb-631d97751120" 00:16:47.990 }, 00:16:47.990 { 00:16:47.990 "nsid": 2, 00:16:47.990 "bdev_name": "Malloc1", 00:16:47.990 "name": "Malloc1", 00:16:47.990 "nguid": "BE18435091A1462FA5D61B8685318163", 00:16:47.990 "uuid": "be184350-91a1-462f-a5d6-1b8685318163" 00:16:47.990 } 00:16:47.990 ] 00:16:47.990 } 00:16:47.990 ] 00:16:47.990 14:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.990 Asynchronous Event Request test 00:16:47.990 Attaching to 10.0.0.2 00:16:47.990 Attached to 10.0.0.2 00:16:47.990 Registering asynchronous event callbacks... 00:16:47.990 Starting namespace attribute notice tests for all controllers... 00:16:47.990 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:47.990 aer_cb - Changed Namespace 00:16:47.990 Cleaning up... 
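Read end to end, host/aer.sh drives the namespace-attribute AEN scenario: the aer tool attaches to cnode1 (created with room for two namespaces, -m 2), registers its event callback, and hot-adding Malloc1 as nsid 2 is what produces the 'Changed Namespace' notice (log page 4, event type 0x02) logged above. Condensed from the trace, with the script's rpc_cmd wrapper replaced by a plain scripts/rpc.py call for readability:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0        # 64 MiB, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Start the AER listener, then hot-add a second namespace to fire the AEN:
    test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2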
00:16:47.990 14:21:29 -- host/aer.sh@43 -- # wait 3175103 00:16:47.990 14:21:29 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:47.990 14:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.990 14:21:29 -- common/autotest_common.sh@10 -- # set +x 00:16:47.990 14:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.990 14:21:29 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:47.990 14:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.990 14:21:29 -- common/autotest_common.sh@10 -- # set +x 00:16:47.990 14:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.990 14:21:29 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:47.990 14:21:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.990 14:21:29 -- common/autotest_common.sh@10 -- # set +x 00:16:47.990 14:21:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.990 14:21:29 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:16:47.990 14:21:29 -- host/aer.sh@51 -- # nvmftestfini 00:16:47.990 14:21:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:47.990 14:21:29 -- nvmf/common.sh@117 -- # sync 00:16:47.990 14:21:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:47.990 14:21:29 -- nvmf/common.sh@120 -- # set +e 00:16:47.990 14:21:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:47.990 14:21:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:47.990 rmmod nvme_tcp 00:16:47.990 rmmod nvme_fabrics 00:16:47.990 rmmod nvme_keyring 00:16:48.248 14:21:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:48.248 14:21:29 -- nvmf/common.sh@124 -- # set -e 00:16:48.248 14:21:29 -- nvmf/common.sh@125 -- # return 0 00:16:48.248 14:21:29 -- nvmf/common.sh@478 -- # '[' -n 3175065 ']' 00:16:48.248 14:21:29 -- nvmf/common.sh@479 -- # killprocess 3175065 00:16:48.248 14:21:29 -- common/autotest_common.sh@936 -- # '[' -z 3175065 ']' 00:16:48.248 14:21:29 -- common/autotest_common.sh@940 -- # kill -0 3175065 00:16:48.248 14:21:29 -- common/autotest_common.sh@941 -- # uname 00:16:48.248 14:21:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:48.248 14:21:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3175065 00:16:48.248 14:21:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:48.248 14:21:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:48.248 14:21:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3175065' 00:16:48.248 killing process with pid 3175065 00:16:48.248 14:21:29 -- common/autotest_common.sh@955 -- # kill 3175065 00:16:48.248 [2024-04-26 14:21:29.601841] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:16:48.248 14:21:29 -- common/autotest_common.sh@960 -- # wait 3175065 00:16:48.507 14:21:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:48.507 14:21:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:48.507 14:21:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:48.507 14:21:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:48.507 14:21:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:48.507 14:21:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.507 14:21:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.507 14:21:29 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.414 14:21:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:50.414 00:16:50.414 real 0m5.086s 00:16:50.415 user 0m4.282s 00:16:50.415 sys 0m1.664s 00:16:50.415 14:21:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:50.415 14:21:31 -- common/autotest_common.sh@10 -- # set +x 00:16:50.415 ************************************ 00:16:50.415 END TEST nvmf_aer 00:16:50.415 ************************************ 00:16:50.415 14:21:31 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:16:50.415 14:21:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:50.415 14:21:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:50.415 14:21:31 -- common/autotest_common.sh@10 -- # set +x 00:16:50.673 ************************************ 00:16:50.673 START TEST nvmf_async_init 00:16:50.673 ************************************ 00:16:50.673 14:21:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:16:50.673 * Looking for test storage... 00:16:50.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:16:50.673 14:21:32 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:50.673 14:21:32 -- nvmf/common.sh@7 -- # uname -s 00:16:50.673 14:21:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.673 14:21:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.673 14:21:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.673 14:21:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.673 14:21:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.674 14:21:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.674 14:21:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.674 14:21:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.674 14:21:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.674 14:21:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.674 14:21:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:50.674 14:21:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:16:50.674 14:21:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.674 14:21:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.674 14:21:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:50.674 14:21:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.674 14:21:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:50.674 14:21:32 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.674 14:21:32 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.674 14:21:32 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.674 14:21:32 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.674 14:21:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.674 14:21:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.674 14:21:32 -- paths/export.sh@5 -- # export PATH 00:16:50.674 14:21:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.674 14:21:32 -- nvmf/common.sh@47 -- # : 0 00:16:50.674 14:21:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:50.674 14:21:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:50.674 14:21:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.674 14:21:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.674 14:21:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.674 14:21:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:50.674 14:21:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:50.674 14:21:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:50.674 14:21:32 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:16:50.674 14:21:32 -- host/async_init.sh@14 -- # null_block_size=512 00:16:50.674 14:21:32 -- host/async_init.sh@15 -- # null_bdev=null0 00:16:50.674 14:21:32 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:16:50.674 14:21:32 -- host/async_init.sh@20 -- # uuidgen 00:16:50.674 14:21:32 -- host/async_init.sh@20 -- # tr -d - 00:16:50.674 14:21:32 -- host/async_init.sh@20 -- # nguid=0d1c23b973b94cc1bc3907690fe6d669 00:16:50.674 14:21:32 -- host/async_init.sh@22 -- # nvmftestinit 00:16:50.674 14:21:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 
00:16:50.674 14:21:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.674 14:21:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:50.674 14:21:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:50.674 14:21:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:50.674 14:21:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.674 14:21:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.674 14:21:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.674 14:21:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:50.674 14:21:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:50.674 14:21:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:50.674 14:21:32 -- common/autotest_common.sh@10 -- # set +x 00:16:52.577 14:21:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:52.577 14:21:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:52.577 14:21:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:52.577 14:21:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:52.577 14:21:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:52.577 14:21:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:52.577 14:21:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:52.577 14:21:33 -- nvmf/common.sh@295 -- # net_devs=() 00:16:52.577 14:21:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:52.577 14:21:33 -- nvmf/common.sh@296 -- # e810=() 00:16:52.577 14:21:33 -- nvmf/common.sh@296 -- # local -ga e810 00:16:52.577 14:21:33 -- nvmf/common.sh@297 -- # x722=() 00:16:52.577 14:21:33 -- nvmf/common.sh@297 -- # local -ga x722 00:16:52.577 14:21:33 -- nvmf/common.sh@298 -- # mlx=() 00:16:52.577 14:21:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:52.577 14:21:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:52.577 14:21:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:52.577 14:21:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:52.577 14:21:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:52.577 14:21:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:52.577 14:21:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:52.577 14:21:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:52.577 14:21:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:52.577 14:21:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:52.577 14:21:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:52.577 14:21:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:52.577 14:21:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:52.577 14:21:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:52.577 14:21:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:52.577 14:21:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:52.577 14:21:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:52.577 14:21:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:52.577 14:21:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:52.577 14:21:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:16:52.577 Found 0000:08:00.0 (0x8086 - 0x159b) 00:16:52.577 14:21:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:52.577 14:21:33 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:52.577 14:21:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.577 14:21:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.577 14:21:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:52.577 14:21:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:52.577 14:21:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:16:52.577 Found 0000:08:00.1 (0x8086 - 0x159b) 00:16:52.577 14:21:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:52.577 14:21:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:52.577 14:21:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.577 14:21:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.577 14:21:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:52.577 14:21:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:52.577 14:21:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:52.577 14:21:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:52.577 14:21:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:52.577 14:21:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.577 14:21:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:52.577 14:21:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.577 14:21:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:16:52.577 Found net devices under 0000:08:00.0: cvl_0_0 00:16:52.577 14:21:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.577 14:21:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:52.577 14:21:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.577 14:21:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:52.577 14:21:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.577 14:21:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:16:52.577 Found net devices under 0000:08:00.1: cvl_0_1 00:16:52.577 14:21:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.577 14:21:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:52.577 14:21:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:52.577 14:21:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:52.577 14:21:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:52.577 14:21:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:52.577 14:21:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:52.578 14:21:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:52.578 14:21:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:52.578 14:21:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:52.578 14:21:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:52.578 14:21:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:52.578 14:21:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:52.578 14:21:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:52.578 14:21:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:52.578 14:21:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:52.578 14:21:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:52.578 14:21:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:52.578 14:21:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:16:52.578 14:21:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:52.578 14:21:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:52.578 14:21:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:52.578 14:21:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:52.578 14:21:33 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:52.578 14:21:33 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:52.578 14:21:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:52.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:52.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:16:52.578 00:16:52.578 --- 10.0.0.2 ping statistics --- 00:16:52.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.578 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:16:52.578 14:21:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:52.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:52.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:16:52.578 00:16:52.578 --- 10.0.0.1 ping statistics --- 00:16:52.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.578 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:16:52.578 14:21:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:52.578 14:21:33 -- nvmf/common.sh@411 -- # return 0 00:16:52.578 14:21:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:52.578 14:21:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:52.578 14:21:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:52.578 14:21:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:52.578 14:21:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:52.578 14:21:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:52.578 14:21:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:52.578 14:21:33 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:16:52.578 14:21:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:52.578 14:21:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:52.578 14:21:33 -- common/autotest_common.sh@10 -- # set +x 00:16:52.578 14:21:33 -- nvmf/common.sh@470 -- # nvmfpid=3176664 00:16:52.578 14:21:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:52.578 14:21:33 -- nvmf/common.sh@471 -- # waitforlisten 3176664 00:16:52.578 14:21:33 -- common/autotest_common.sh@817 -- # '[' -z 3176664 ']' 00:16:52.578 14:21:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.578 14:21:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:52.578 14:21:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.578 14:21:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:52.578 14:21:33 -- common/autotest_common.sh@10 -- # set +x 00:16:52.578 [2024-04-26 14:21:33.895436] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
00:16:52.578 [2024-04-26 14:21:33.895518] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.578 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.578 [2024-04-26 14:21:33.959078] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.578 [2024-04-26 14:21:34.073355] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.578 [2024-04-26 14:21:34.073418] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.578 [2024-04-26 14:21:34.073440] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.578 [2024-04-26 14:21:34.073471] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.578 [2024-04-26 14:21:34.073491] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:52.578 [2024-04-26 14:21:34.073532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.837 14:21:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:52.837 14:21:34 -- common/autotest_common.sh@850 -- # return 0 00:16:52.837 14:21:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:52.837 14:21:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:52.837 14:21:34 -- common/autotest_common.sh@10 -- # set +x 00:16:52.837 14:21:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.837 14:21:34 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:16:52.837 14:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.837 14:21:34 -- common/autotest_common.sh@10 -- # set +x 00:16:52.837 [2024-04-26 14:21:34.213556] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.837 14:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.837 14:21:34 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:16:52.837 14:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.837 14:21:34 -- common/autotest_common.sh@10 -- # set +x 00:16:52.837 null0 00:16:52.837 14:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.837 14:21:34 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:16:52.837 14:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.837 14:21:34 -- common/autotest_common.sh@10 -- # set +x 00:16:52.837 14:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.837 14:21:34 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:16:52.837 14:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.837 14:21:34 -- common/autotest_common.sh@10 -- # set +x 00:16:52.837 14:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.837 14:21:34 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0d1c23b973b94cc1bc3907690fe6d669 00:16:52.837 14:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.837 14:21:34 -- common/autotest_common.sh@10 -- # set +x 00:16:52.837 14:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.837 14:21:34 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
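The -g argument is the point of this setup: async_init.sh strips the dashes from a freshly generated UUID and pins the result as the namespace NGUID, so the nvme0n1 bdev attached over the loopback path below keeps reporting uuid 0d1c23b9-73b9-4cc1-bc39-07690fe6d669 across controller resets. The target-side sequence just traced, condensed (scripts/rpc.py shown in place of the rpc_cmd wrapper):

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py bdev_null_create null0 1024 512                 # 1 GiB null bdev, 512 B blocks
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
        -g 0d1c23b973b94cc1bc3907690fe6d669                        # uuidgen output with dashes removed
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420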
00:16:52.837 14:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.837 14:21:34 -- common/autotest_common.sh@10 -- # set +x 00:16:52.837 [2024-04-26 14:21:34.253789] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.837 14:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.837 14:21:34 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:16:52.837 14:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.837 14:21:34 -- common/autotest_common.sh@10 -- # set +x 00:16:53.095 nvme0n1 00:16:53.095 14:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.095 14:21:34 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:53.095 14:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.095 14:21:34 -- common/autotest_common.sh@10 -- # set +x 00:16:53.095 [ 00:16:53.095 { 00:16:53.095 "name": "nvme0n1", 00:16:53.095 "aliases": [ 00:16:53.095 "0d1c23b9-73b9-4cc1-bc39-07690fe6d669" 00:16:53.095 ], 00:16:53.095 "product_name": "NVMe disk", 00:16:53.095 "block_size": 512, 00:16:53.095 "num_blocks": 2097152, 00:16:53.095 "uuid": "0d1c23b9-73b9-4cc1-bc39-07690fe6d669", 00:16:53.095 "assigned_rate_limits": { 00:16:53.095 "rw_ios_per_sec": 0, 00:16:53.095 "rw_mbytes_per_sec": 0, 00:16:53.095 "r_mbytes_per_sec": 0, 00:16:53.095 "w_mbytes_per_sec": 0 00:16:53.095 }, 00:16:53.095 "claimed": false, 00:16:53.095 "zoned": false, 00:16:53.095 "supported_io_types": { 00:16:53.095 "read": true, 00:16:53.095 "write": true, 00:16:53.095 "unmap": false, 00:16:53.095 "write_zeroes": true, 00:16:53.095 "flush": true, 00:16:53.095 "reset": true, 00:16:53.095 "compare": true, 00:16:53.095 "compare_and_write": true, 00:16:53.095 "abort": true, 00:16:53.095 "nvme_admin": true, 00:16:53.095 "nvme_io": true 00:16:53.095 }, 00:16:53.095 "memory_domains": [ 00:16:53.095 { 00:16:53.095 "dma_device_id": "system", 00:16:53.095 "dma_device_type": 1 00:16:53.095 } 00:16:53.095 ], 00:16:53.095 "driver_specific": { 00:16:53.095 "nvme": [ 00:16:53.095 { 00:16:53.095 "trid": { 00:16:53.095 "trtype": "TCP", 00:16:53.095 "adrfam": "IPv4", 00:16:53.095 "traddr": "10.0.0.2", 00:16:53.095 "trsvcid": "4420", 00:16:53.095 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:53.095 }, 00:16:53.095 "ctrlr_data": { 00:16:53.095 "cntlid": 1, 00:16:53.095 "vendor_id": "0x8086", 00:16:53.095 "model_number": "SPDK bdev Controller", 00:16:53.095 "serial_number": "00000000000000000000", 00:16:53.095 "firmware_revision": "24.05", 00:16:53.095 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:53.095 "oacs": { 00:16:53.095 "security": 0, 00:16:53.095 "format": 0, 00:16:53.095 "firmware": 0, 00:16:53.095 "ns_manage": 0 00:16:53.095 }, 00:16:53.095 "multi_ctrlr": true, 00:16:53.095 "ana_reporting": false 00:16:53.095 }, 00:16:53.095 "vs": { 00:16:53.095 "nvme_version": "1.3" 00:16:53.095 }, 00:16:53.095 "ns_data": { 00:16:53.095 "id": 1, 00:16:53.095 "can_share": true 00:16:53.095 } 00:16:53.095 } 00:16:53.095 ], 00:16:53.095 "mp_policy": "active_passive" 00:16:53.095 } 00:16:53.095 } 00:16:53.095 ] 00:16:53.095 14:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.095 14:21:34 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:16:53.095 14:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.095 14:21:34 -- common/autotest_common.sh@10 -- # set +x 00:16:53.095 [2024-04-26 14:21:34.510495] 
nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:53.095 [2024-04-26 14:21:34.510593] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ba310 (9): Bad file descriptor 00:16:53.095 [2024-04-26 14:21:34.642791] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:53.095 14:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.095 14:21:34 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:53.095 14:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.095 14:21:34 -- common/autotest_common.sh@10 -- # set +x 00:16:53.095 [ 00:16:53.095 { 00:16:53.095 "name": "nvme0n1", 00:16:53.095 "aliases": [ 00:16:53.095 "0d1c23b9-73b9-4cc1-bc39-07690fe6d669" 00:16:53.095 ], 00:16:53.095 "product_name": "NVMe disk", 00:16:53.095 "block_size": 512, 00:16:53.095 "num_blocks": 2097152, 00:16:53.095 "uuid": "0d1c23b9-73b9-4cc1-bc39-07690fe6d669", 00:16:53.095 "assigned_rate_limits": { 00:16:53.095 "rw_ios_per_sec": 0, 00:16:53.095 "rw_mbytes_per_sec": 0, 00:16:53.095 "r_mbytes_per_sec": 0, 00:16:53.095 "w_mbytes_per_sec": 0 00:16:53.095 }, 00:16:53.095 "claimed": false, 00:16:53.095 "zoned": false, 00:16:53.095 "supported_io_types": { 00:16:53.095 "read": true, 00:16:53.095 "write": true, 00:16:53.095 "unmap": false, 00:16:53.095 "write_zeroes": true, 00:16:53.095 "flush": true, 00:16:53.095 "reset": true, 00:16:53.095 "compare": true, 00:16:53.095 "compare_and_write": true, 00:16:53.095 "abort": true, 00:16:53.095 "nvme_admin": true, 00:16:53.095 "nvme_io": true 00:16:53.095 }, 00:16:53.095 "memory_domains": [ 00:16:53.095 { 00:16:53.095 "dma_device_id": "system", 00:16:53.095 "dma_device_type": 1 00:16:53.095 } 00:16:53.095 ], 00:16:53.095 "driver_specific": { 00:16:53.095 "nvme": [ 00:16:53.095 { 00:16:53.095 "trid": { 00:16:53.095 "trtype": "TCP", 00:16:53.095 "adrfam": "IPv4", 00:16:53.095 "traddr": "10.0.0.2", 00:16:53.095 "trsvcid": "4420", 00:16:53.095 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:53.095 }, 00:16:53.095 "ctrlr_data": { 00:16:53.095 "cntlid": 2, 00:16:53.095 "vendor_id": "0x8086", 00:16:53.095 "model_number": "SPDK bdev Controller", 00:16:53.095 "serial_number": "00000000000000000000", 00:16:53.095 "firmware_revision": "24.05", 00:16:53.095 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:53.095 "oacs": { 00:16:53.095 "security": 0, 00:16:53.095 "format": 0, 00:16:53.095 "firmware": 0, 00:16:53.095 "ns_manage": 0 00:16:53.095 }, 00:16:53.095 "multi_ctrlr": true, 00:16:53.095 "ana_reporting": false 00:16:53.095 }, 00:16:53.095 "vs": { 00:16:53.095 "nvme_version": "1.3" 00:16:53.095 }, 00:16:53.095 "ns_data": { 00:16:53.095 "id": 1, 00:16:53.095 "can_share": true 00:16:53.095 } 00:16:53.095 } 00:16:53.095 ], 00:16:53.095 "mp_policy": "active_passive" 00:16:53.095 } 00:16:53.095 } 00:16:53.095 ] 00:16:53.095 14:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.095 14:21:34 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.095 14:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.095 14:21:34 -- common/autotest_common.sh@10 -- # set +x 00:16:53.354 14:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.354 14:21:34 -- host/async_init.sh@53 -- # mktemp 00:16:53.354 14:21:34 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.FYDTAbpxWY 00:16:53.354 14:21:34 -- host/async_init.sh@54 -- # echo -n 
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:53.354 14:21:34 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.FYDTAbpxWY 00:16:53.354 14:21:34 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:16:53.354 14:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.354 14:21:34 -- common/autotest_common.sh@10 -- # set +x 00:16:53.354 14:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.354 14:21:34 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:16:53.354 14:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.354 14:21:34 -- common/autotest_common.sh@10 -- # set +x 00:16:53.354 [2024-04-26 14:21:34.703132] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:53.354 [2024-04-26 14:21:34.703279] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:53.354 14:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.354 14:21:34 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FYDTAbpxWY 00:16:53.354 14:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.354 14:21:34 -- common/autotest_common.sh@10 -- # set +x 00:16:53.354 [2024-04-26 14:21:34.711155] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:53.354 14:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.354 14:21:34 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FYDTAbpxWY 00:16:53.354 14:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.354 14:21:34 -- common/autotest_common.sh@10 -- # set +x 00:16:53.354 [2024-04-26 14:21:34.719175] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:53.354 [2024-04-26 14:21:34.719241] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:53.354 nvme0n1 00:16:53.354 14:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.354 14:21:34 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:53.354 14:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.354 14:21:34 -- common/autotest_common.sh@10 -- # set +x 00:16:53.354 [ 00:16:53.354 { 00:16:53.354 "name": "nvme0n1", 00:16:53.354 "aliases": [ 00:16:53.354 "0d1c23b9-73b9-4cc1-bc39-07690fe6d669" 00:16:53.354 ], 00:16:53.354 "product_name": "NVMe disk", 00:16:53.354 "block_size": 512, 00:16:53.354 "num_blocks": 2097152, 00:16:53.354 "uuid": "0d1c23b9-73b9-4cc1-bc39-07690fe6d669", 00:16:53.354 "assigned_rate_limits": { 00:16:53.354 "rw_ios_per_sec": 0, 00:16:53.354 "rw_mbytes_per_sec": 0, 00:16:53.354 "r_mbytes_per_sec": 0, 00:16:53.354 "w_mbytes_per_sec": 0 00:16:53.354 }, 00:16:53.354 "claimed": false, 00:16:53.354 "zoned": false, 00:16:53.354 "supported_io_types": { 00:16:53.354 "read": true, 00:16:53.354 "write": true, 00:16:53.354 "unmap": false, 00:16:53.354 "write_zeroes": true, 00:16:53.354 "flush": true, 00:16:53.354 "reset": true, 00:16:53.354 "compare": true, 00:16:53.354 "compare_and_write": true, 00:16:53.354 
"abort": true, 00:16:53.354 "nvme_admin": true, 00:16:53.354 "nvme_io": true 00:16:53.354 }, 00:16:53.354 "memory_domains": [ 00:16:53.354 { 00:16:53.354 "dma_device_id": "system", 00:16:53.354 "dma_device_type": 1 00:16:53.354 } 00:16:53.354 ], 00:16:53.354 "driver_specific": { 00:16:53.354 "nvme": [ 00:16:53.354 { 00:16:53.354 "trid": { 00:16:53.354 "trtype": "TCP", 00:16:53.354 "adrfam": "IPv4", 00:16:53.354 "traddr": "10.0.0.2", 00:16:53.354 "trsvcid": "4421", 00:16:53.354 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:53.354 }, 00:16:53.354 "ctrlr_data": { 00:16:53.354 "cntlid": 3, 00:16:53.354 "vendor_id": "0x8086", 00:16:53.354 "model_number": "SPDK bdev Controller", 00:16:53.354 "serial_number": "00000000000000000000", 00:16:53.354 "firmware_revision": "24.05", 00:16:53.354 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:53.354 "oacs": { 00:16:53.354 "security": 0, 00:16:53.354 "format": 0, 00:16:53.354 "firmware": 0, 00:16:53.354 "ns_manage": 0 00:16:53.354 }, 00:16:53.354 "multi_ctrlr": true, 00:16:53.354 "ana_reporting": false 00:16:53.354 }, 00:16:53.354 "vs": { 00:16:53.354 "nvme_version": "1.3" 00:16:53.354 }, 00:16:53.354 "ns_data": { 00:16:53.354 "id": 1, 00:16:53.354 "can_share": true 00:16:53.354 } 00:16:53.354 } 00:16:53.354 ], 00:16:53.354 "mp_policy": "active_passive" 00:16:53.354 } 00:16:53.354 } 00:16:53.354 ] 00:16:53.354 14:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.354 14:21:34 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.354 14:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.354 14:21:34 -- common/autotest_common.sh@10 -- # set +x 00:16:53.354 14:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.354 14:21:34 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.FYDTAbpxWY 00:16:53.354 14:21:34 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:16:53.354 14:21:34 -- host/async_init.sh@78 -- # nvmftestfini 00:16:53.354 14:21:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:53.355 14:21:34 -- nvmf/common.sh@117 -- # sync 00:16:53.355 14:21:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:53.355 14:21:34 -- nvmf/common.sh@120 -- # set +e 00:16:53.355 14:21:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:53.355 14:21:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:53.355 rmmod nvme_tcp 00:16:53.355 rmmod nvme_fabrics 00:16:53.355 rmmod nvme_keyring 00:16:53.355 14:21:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:53.355 14:21:34 -- nvmf/common.sh@124 -- # set -e 00:16:53.355 14:21:34 -- nvmf/common.sh@125 -- # return 0 00:16:53.355 14:21:34 -- nvmf/common.sh@478 -- # '[' -n 3176664 ']' 00:16:53.355 14:21:34 -- nvmf/common.sh@479 -- # killprocess 3176664 00:16:53.355 14:21:34 -- common/autotest_common.sh@936 -- # '[' -z 3176664 ']' 00:16:53.355 14:21:34 -- common/autotest_common.sh@940 -- # kill -0 3176664 00:16:53.355 14:21:34 -- common/autotest_common.sh@941 -- # uname 00:16:53.355 14:21:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:53.355 14:21:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3176664 00:16:53.355 14:21:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:53.355 14:21:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:53.355 14:21:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3176664' 00:16:53.355 killing process with pid 3176664 00:16:53.355 14:21:34 -- common/autotest_common.sh@955 -- # kill 3176664 00:16:53.355 
[2024-04-26 14:21:34.907668] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:53.355 [2024-04-26 14:21:34.907714] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:53.355 14:21:34 -- common/autotest_common.sh@960 -- # wait 3176664 00:16:53.658 14:21:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:53.658 14:21:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:53.658 14:21:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:53.658 14:21:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:53.658 14:21:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:53.658 14:21:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.658 14:21:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.658 14:21:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.194 14:21:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:56.194 00:16:56.194 real 0m5.143s 00:16:56.194 user 0m2.008s 00:16:56.194 sys 0m1.563s 00:16:56.194 14:21:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:56.194 14:21:37 -- common/autotest_common.sh@10 -- # set +x 00:16:56.194 ************************************ 00:16:56.194 END TEST nvmf_async_init 00:16:56.194 ************************************ 00:16:56.194 14:21:37 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:16:56.194 14:21:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:56.194 14:21:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:56.194 14:21:37 -- common/autotest_common.sh@10 -- # set +x 00:16:56.194 ************************************ 00:16:56.194 START TEST dma 00:16:56.194 ************************************ 00:16:56.194 14:21:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:16:56.194 * Looking for test storage... 
00:16:56.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:16:56.194 14:21:37 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.194 14:21:37 -- nvmf/common.sh@7 -- # uname -s 00:16:56.194 14:21:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.195 14:21:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.195 14:21:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.195 14:21:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.195 14:21:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.195 14:21:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.195 14:21:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.195 14:21:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.195 14:21:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.195 14:21:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.195 14:21:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:56.195 14:21:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:16:56.195 14:21:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.195 14:21:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.195 14:21:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.195 14:21:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.195 14:21:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.195 14:21:37 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.195 14:21:37 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.195 14:21:37 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.195 14:21:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.195 14:21:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.195 14:21:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.195 14:21:37 -- paths/export.sh@5 -- # export PATH 00:16:56.195 14:21:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.195 14:21:37 -- nvmf/common.sh@47 -- # : 0 00:16:56.195 14:21:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:56.195 14:21:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:56.195 14:21:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.195 14:21:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.195 14:21:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.195 14:21:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:56.195 14:21:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:56.195 14:21:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:56.195 14:21:37 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:16:56.195 14:21:37 -- host/dma.sh@13 -- # exit 0 00:16:56.195 00:16:56.195 real 0m0.075s 00:16:56.195 user 0m0.040s 00:16:56.195 sys 0m0.040s 00:16:56.195 14:21:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:56.195 14:21:37 -- common/autotest_common.sh@10 -- # set +x 00:16:56.195 ************************************ 00:16:56.195 END TEST dma 00:16:56.195 ************************************ 00:16:56.195 14:21:37 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:56.195 14:21:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:56.195 14:21:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:56.195 14:21:37 -- common/autotest_common.sh@10 -- # set +x 00:16:56.195 ************************************ 00:16:56.195 START TEST nvmf_identify 00:16:56.195 ************************************ 00:16:56.195 14:21:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:56.195 * Looking for test storage... 
00:16:56.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:16:56.195 14:21:37 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.195 14:21:37 -- nvmf/common.sh@7 -- # uname -s 00:16:56.195 14:21:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.195 14:21:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.195 14:21:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.195 14:21:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.195 14:21:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.195 14:21:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.195 14:21:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.195 14:21:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.195 14:21:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.195 14:21:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.195 14:21:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:56.195 14:21:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:16:56.195 14:21:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.195 14:21:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.195 14:21:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.195 14:21:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.195 14:21:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.195 14:21:37 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.195 14:21:37 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.195 14:21:37 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.195 14:21:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.195 14:21:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.195 14:21:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.195 14:21:37 -- paths/export.sh@5 -- # export PATH 00:16:56.195 14:21:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.195 14:21:37 -- nvmf/common.sh@47 -- # : 0 00:16:56.195 14:21:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:56.195 14:21:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:56.195 14:21:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.195 14:21:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.195 14:21:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.195 14:21:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:56.195 14:21:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:56.195 14:21:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:56.195 14:21:37 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:56.195 14:21:37 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:56.195 14:21:37 -- host/identify.sh@14 -- # nvmftestinit 00:16:56.195 14:21:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:56.195 14:21:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.195 14:21:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:56.195 14:21:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:56.195 14:21:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:56.195 14:21:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.195 14:21:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.195 14:21:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.195 14:21:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:56.195 14:21:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:56.195 14:21:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:56.196 14:21:37 -- common/autotest_common.sh@10 -- # set +x 00:16:57.571 14:21:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:57.571 14:21:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:57.571 14:21:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:57.571 14:21:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:57.571 14:21:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:57.572 14:21:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:57.572 14:21:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:57.572 14:21:39 -- nvmf/common.sh@295 -- # net_devs=() 00:16:57.572 14:21:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:57.572 14:21:39 -- nvmf/common.sh@296 
-- # e810=() 00:16:57.572 14:21:39 -- nvmf/common.sh@296 -- # local -ga e810 00:16:57.572 14:21:39 -- nvmf/common.sh@297 -- # x722=() 00:16:57.572 14:21:39 -- nvmf/common.sh@297 -- # local -ga x722 00:16:57.572 14:21:39 -- nvmf/common.sh@298 -- # mlx=() 00:16:57.572 14:21:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:57.572 14:21:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:57.572 14:21:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:57.572 14:21:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:57.572 14:21:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:57.572 14:21:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:57.572 14:21:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:57.572 14:21:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:57.572 14:21:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:57.572 14:21:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:57.572 14:21:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:57.572 14:21:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:57.572 14:21:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:57.572 14:21:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:57.572 14:21:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:57.572 14:21:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:57.572 14:21:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:57.572 14:21:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:57.572 14:21:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:57.572 14:21:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:16:57.572 Found 0000:08:00.0 (0x8086 - 0x159b) 00:16:57.572 14:21:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:57.572 14:21:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:57.572 14:21:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:57.572 14:21:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:57.572 14:21:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:57.572 14:21:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:57.572 14:21:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:16:57.572 Found 0000:08:00.1 (0x8086 - 0x159b) 00:16:57.572 14:21:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:57.572 14:21:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:57.572 14:21:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:57.572 14:21:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:57.572 14:21:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:57.572 14:21:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:57.572 14:21:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:57.572 14:21:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:57.572 14:21:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:57.572 14:21:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.572 14:21:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:57.572 14:21:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.572 14:21:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:16:57.572 Found 
net devices under 0000:08:00.0: cvl_0_0 00:16:57.572 14:21:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.572 14:21:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:57.572 14:21:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.572 14:21:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:57.572 14:21:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.572 14:21:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:16:57.572 Found net devices under 0000:08:00.1: cvl_0_1 00:16:57.572 14:21:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.572 14:21:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:57.572 14:21:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:57.572 14:21:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:57.572 14:21:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:57.572 14:21:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:57.572 14:21:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:57.572 14:21:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:57.572 14:21:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:57.572 14:21:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:57.572 14:21:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:57.572 14:21:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:57.572 14:21:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:57.572 14:21:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:57.572 14:21:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:57.572 14:21:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:57.834 14:21:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:57.834 14:21:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:57.834 14:21:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:57.834 14:21:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:57.834 14:21:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:57.834 14:21:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:57.834 14:21:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:57.834 14:21:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:57.834 14:21:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:57.834 14:21:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:57.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:57.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:16:57.834 00:16:57.834 --- 10.0.0.2 ping statistics --- 00:16:57.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.834 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:16:57.834 14:21:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:57.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:57.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:16:57.834 00:16:57.834 --- 10.0.0.1 ping statistics --- 00:16:57.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.834 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:16:57.834 14:21:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:57.834 14:21:39 -- nvmf/common.sh@411 -- # return 0 00:16:57.834 14:21:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:57.834 14:21:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:57.834 14:21:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:57.834 14:21:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:57.834 14:21:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:57.834 14:21:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:57.834 14:21:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:57.834 14:21:39 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:16:57.834 14:21:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:57.834 14:21:39 -- common/autotest_common.sh@10 -- # set +x 00:16:57.834 14:21:39 -- host/identify.sh@19 -- # nvmfpid=3178448 00:16:57.834 14:21:39 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:57.834 14:21:39 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:57.834 14:21:39 -- host/identify.sh@23 -- # waitforlisten 3178448 00:16:57.834 14:21:39 -- common/autotest_common.sh@817 -- # '[' -z 3178448 ']' 00:16:57.834 14:21:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.834 14:21:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:57.834 14:21:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.834 14:21:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:57.834 14:21:39 -- common/autotest_common.sh@10 -- # set +x 00:16:57.834 [2024-04-26 14:21:39.320898] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:16:57.834 [2024-04-26 14:21:39.321003] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.834 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.834 [2024-04-26 14:21:39.388702] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:58.092 [2024-04-26 14:21:39.505367] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.092 [2024-04-26 14:21:39.505427] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:58.092 [2024-04-26 14:21:39.505443] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.092 [2024-04-26 14:21:39.505456] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:58.092 [2024-04-26 14:21:39.505467] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
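The block above is nvmf_tcp_init splitting the two E810 ports across network namespaces: cvl_0_0 is moved into a private netns to serve as the target (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), so NVMe/TCP traffic crosses the physical link instead of loopback. A minimal sketch of the same wiring, assuming the cvl_0_0/cvl_0_1 names from this run (other machines will have different NIC names):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into its own netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root netns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                   # initiator-to-target sanity check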
00:16:58.092 [2024-04-26 14:21:39.507654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.092 [2024-04-26 14:21:39.507698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.092 [2024-04-26 14:21:39.507788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:58.092 [2024-04-26 14:21:39.507820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.092 14:21:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:58.092 14:21:39 -- common/autotest_common.sh@850 -- # return 0 00:16:58.092 14:21:39 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:58.092 14:21:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.092 14:21:39 -- common/autotest_common.sh@10 -- # set +x 00:16:58.092 [2024-04-26 14:21:39.627127] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.092 14:21:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.092 14:21:39 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:16:58.092 14:21:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:58.092 14:21:39 -- common/autotest_common.sh@10 -- # set +x 00:16:58.092 14:21:39 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:58.092 14:21:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.092 14:21:39 -- common/autotest_common.sh@10 -- # set +x 00:16:58.352 Malloc0 00:16:58.352 14:21:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.352 14:21:39 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:58.352 14:21:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.352 14:21:39 -- common/autotest_common.sh@10 -- # set +x 00:16:58.352 14:21:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.352 14:21:39 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:16:58.352 14:21:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.352 14:21:39 -- common/autotest_common.sh@10 -- # set +x 00:16:58.352 14:21:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.352 14:21:39 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:58.352 14:21:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.352 14:21:39 -- common/autotest_common.sh@10 -- # set +x 00:16:58.352 [2024-04-26 14:21:39.695939] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:58.352 14:21:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.352 14:21:39 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:58.352 14:21:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.352 14:21:39 -- common/autotest_common.sh@10 -- # set +x 00:16:58.352 14:21:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.352 14:21:39 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:16:58.352 14:21:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.352 14:21:39 -- common/autotest_common.sh@10 -- # set +x 00:16:58.352 [2024-04-26 14:21:39.711711] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:16:58.352 [ 
00:16:58.352 { 00:16:58.352 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:58.352 "subtype": "Discovery", 00:16:58.352 "listen_addresses": [ 00:16:58.352 { 00:16:58.352 "transport": "TCP", 00:16:58.352 "trtype": "TCP", 00:16:58.352 "adrfam": "IPv4", 00:16:58.352 "traddr": "10.0.0.2", 00:16:58.352 "trsvcid": "4420" 00:16:58.352 } 00:16:58.352 ], 00:16:58.352 "allow_any_host": true, 00:16:58.352 "hosts": [] 00:16:58.352 }, 00:16:58.352 { 00:16:58.352 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:58.352 "subtype": "NVMe", 00:16:58.352 "listen_addresses": [ 00:16:58.352 { 00:16:58.352 "transport": "TCP", 00:16:58.352 "trtype": "TCP", 00:16:58.352 "adrfam": "IPv4", 00:16:58.352 "traddr": "10.0.0.2", 00:16:58.352 "trsvcid": "4420" 00:16:58.352 } 00:16:58.352 ], 00:16:58.352 "allow_any_host": true, 00:16:58.352 "hosts": [], 00:16:58.352 "serial_number": "SPDK00000000000001", 00:16:58.352 "model_number": "SPDK bdev Controller", 00:16:58.352 "max_namespaces": 32, 00:16:58.352 "min_cntlid": 1, 00:16:58.352 "max_cntlid": 65519, 00:16:58.352 "namespaces": [ 00:16:58.352 { 00:16:58.352 "nsid": 1, 00:16:58.352 "bdev_name": "Malloc0", 00:16:58.352 "name": "Malloc0", 00:16:58.352 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:16:58.352 "eui64": "ABCDEF0123456789", 00:16:58.352 "uuid": "3ebe265d-9cb5-40b7-af12-34e7341ef26e" 00:16:58.352 } 00:16:58.352 ] 00:16:58.352 } 00:16:58.352 ] 00:16:58.352 14:21:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.352 14:21:39 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:16:58.352 [2024-04-26 14:21:39.738912] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
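Everything identify.sh has done up to this point is target configuration over the RPC socket: create the TCP transport, back a namespace with a 64 MiB malloc bdev (MALLOC_BDEV_SIZE=64, block size 512), create subsystem cnode1, attach the namespace, and register listeners for both the subsystem and the discovery service; the nvmf_get_subsystems dump above confirms the result. A sketch of the same sequence issued through scripts/rpc.py directly (rpc_cmd is a thin wrapper around it), assuming a target already running with the defaults used here:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420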
00:16:58.352 [2024-04-26 14:21:39.738962] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3178557 ] 00:16:58.352 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.352 [2024-04-26 14:21:39.781711] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:16:58.352 [2024-04-26 14:21:39.781776] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:58.352 [2024-04-26 14:21:39.781787] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:58.352 [2024-04-26 14:21:39.781804] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:58.352 [2024-04-26 14:21:39.781819] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:58.353 [2024-04-26 14:21:39.782076] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:16:58.353 [2024-04-26 14:21:39.782132] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2097d70 0 00:16:58.353 [2024-04-26 14:21:39.795652] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:58.353 [2024-04-26 14:21:39.795675] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:58.353 [2024-04-26 14:21:39.795685] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:58.353 [2024-04-26 14:21:39.795692] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:58.353 [2024-04-26 14:21:39.795744] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.353 [2024-04-26 14:21:39.795757] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.353 [2024-04-26 14:21:39.795766] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2097d70) 00:16:58.353 [2024-04-26 14:21:39.795786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:58.353 [2024-04-26 14:21:39.795814] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2100ec0, cid 0, qid 0 00:16:58.353 [2024-04-26 14:21:39.803656] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.353 [2024-04-26 14:21:39.803675] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.353 [2024-04-26 14:21:39.803688] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.353 [2024-04-26 14:21:39.803698] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2100ec0) on tqpair=0x2097d70 00:16:58.353 [2024-04-26 14:21:39.803722] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:58.353 [2024-04-26 14:21:39.803735] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:16:58.353 [2024-04-26 14:21:39.803747] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:16:58.353 [2024-04-26 14:21:39.803769] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.353 [2024-04-26 14:21:39.803779] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:16:58.353 [2024-04-26 14:21:39.803787] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2097d70) 00:16:58.353 [2024-04-26 14:21:39.803800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.353 [2024-04-26 14:21:39.803825] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2100ec0, cid 0, qid 0 00:16:58.353 [2024-04-26 14:21:39.803948] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.353 [2024-04-26 14:21:39.803964] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.353 [2024-04-26 14:21:39.803972] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.353 [2024-04-26 14:21:39.803980] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2100ec0) on tqpair=0x2097d70 00:16:58.353 [2024-04-26 14:21:39.803992] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:16:58.353 [2024-04-26 14:21:39.804006] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:16:58.353 [2024-04-26 14:21:39.804020] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.353 [2024-04-26 14:21:39.804028] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.353 [2024-04-26 14:21:39.804036] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2097d70) 00:16:58.353 [2024-04-26 14:21:39.804048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.353 [2024-04-26 14:21:39.804070] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2100ec0, cid 0, qid 0 00:16:58.353 [2024-04-26 14:21:39.804177] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.353 [2024-04-26 14:21:39.804193] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.353 [2024-04-26 14:21:39.804201] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.353 [2024-04-26 14:21:39.804209] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2100ec0) on tqpair=0x2097d70 00:16:58.353 [2024-04-26 14:21:39.804221] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:16:58.353 [2024-04-26 14:21:39.804237] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:16:58.353 [2024-04-26 14:21:39.804251] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.353 [2024-04-26 14:21:39.804259] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.353 [2024-04-26 14:21:39.804267] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2097d70) 00:16:58.353 [2024-04-26 14:21:39.804279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.353 [2024-04-26 14:21:39.804300] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2100ec0, cid 0, qid 0 00:16:58.353 [2024-04-26 14:21:39.804402] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.353 [2024-04-26 
14:21:39.804415] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.353 [2024-04-26 14:21:39.804427] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.353 [2024-04-26 14:21:39.804436] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2100ec0) on tqpair=0x2097d70 00:16:58.353 [2024-04-26 14:21:39.804448] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:58.353 [2024-04-26 14:21:39.804465] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.353 [2024-04-26 14:21:39.804475] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.353 [2024-04-26 14:21:39.804483] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2097d70) 00:16:58.353 [2024-04-26 14:21:39.804495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.353 [2024-04-26 14:21:39.804517] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2100ec0, cid 0, qid 0 00:16:58.353 [2024-04-26 14:21:39.804624] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.353 [2024-04-26 14:21:39.808653] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.353 [2024-04-26 14:21:39.808665] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.353 [2024-04-26 14:21:39.808673] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2100ec0) on tqpair=0x2097d70 00:16:58.353 [2024-04-26 14:21:39.808685] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:16:58.353 [2024-04-26 14:21:39.808695] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:16:58.353 [2024-04-26 14:21:39.808712] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:58.353 [2024-04-26 14:21:39.808824] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:16:58.353 [2024-04-26 14:21:39.808834] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:58.353 [2024-04-26 14:21:39.808849] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.353 [2024-04-26 14:21:39.808858] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.353 [2024-04-26 14:21:39.808866] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2097d70) 00:16:58.353 [2024-04-26 14:21:39.808878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.353 [2024-04-26 14:21:39.808902] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2100ec0, cid 0, qid 0 00:16:58.353 [2024-04-26 14:21:39.809016] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.353 [2024-04-26 14:21:39.809032] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.353 [2024-04-26 14:21:39.809040] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:16:58.353 [2024-04-26 14:21:39.809048] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2100ec0) on tqpair=0x2097d70 00:16:58.353 [2024-04-26 14:21:39.809060] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:58.353 [2024-04-26 14:21:39.809078] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.353 [2024-04-26 14:21:39.809087] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.353 [2024-04-26 14:21:39.809095] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2097d70) 00:16:58.353 [2024-04-26 14:21:39.809107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.353 [2024-04-26 14:21:39.809129] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2100ec0, cid 0, qid 0 00:16:58.353 [2024-04-26 14:21:39.809231] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.353 [2024-04-26 14:21:39.809247] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.353 [2024-04-26 14:21:39.809255] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.353 [2024-04-26 14:21:39.809262] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2100ec0) on tqpair=0x2097d70 00:16:58.353 [2024-04-26 14:21:39.809273] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:58.353 [2024-04-26 14:21:39.809283] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:16:58.353 [2024-04-26 14:21:39.809298] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:16:58.353 [2024-04-26 14:21:39.809318] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:16:58.353 [2024-04-26 14:21:39.809338] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.353 [2024-04-26 14:21:39.809348] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2097d70) 00:16:58.353 [2024-04-26 14:21:39.809361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.353 [2024-04-26 14:21:39.809384] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2100ec0, cid 0, qid 0 00:16:58.353 [2024-04-26 14:21:39.809538] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:58.353 [2024-04-26 14:21:39.809554] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:58.353 [2024-04-26 14:21:39.809562] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:58.353 [2024-04-26 14:21:39.809570] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2097d70): datao=0, datal=4096, cccid=0 00:16:58.353 [2024-04-26 14:21:39.809579] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2100ec0) on tqpair(0x2097d70): expected_datao=0, payload_size=4096 00:16:58.353 [2024-04-26 14:21:39.809588] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.353 [2024-04-26 14:21:39.809607] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:58.353 [2024-04-26 14:21:39.809618] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.850644] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.354 [2024-04-26 14:21:39.850664] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.354 [2024-04-26 14:21:39.850672] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.850681] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2100ec0) on tqpair=0x2097d70 00:16:58.354 [2024-04-26 14:21:39.850697] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:16:58.354 [2024-04-26 14:21:39.850708] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:16:58.354 [2024-04-26 14:21:39.850717] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:16:58.354 [2024-04-26 14:21:39.850726] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:16:58.354 [2024-04-26 14:21:39.850736] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:16:58.354 [2024-04-26 14:21:39.850745] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:16:58.354 [2024-04-26 14:21:39.850763] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:16:58.354 [2024-04-26 14:21:39.850777] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.850791] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.850799] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2097d70) 00:16:58.354 [2024-04-26 14:21:39.850813] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:58.354 [2024-04-26 14:21:39.850838] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2100ec0, cid 0, qid 0 00:16:58.354 [2024-04-26 14:21:39.850945] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.354 [2024-04-26 14:21:39.850961] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.354 [2024-04-26 14:21:39.850969] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.850977] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2100ec0) on tqpair=0x2097d70 00:16:58.354 [2024-04-26 14:21:39.850991] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.851000] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.851008] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2097d70) 00:16:58.354 [2024-04-26 14:21:39.851019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:16:58.354 [2024-04-26 14:21:39.851031] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.851039] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.851046] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2097d70) 00:16:58.354 [2024-04-26 14:21:39.851057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.354 [2024-04-26 14:21:39.851068] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.851076] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.851083] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2097d70) 00:16:58.354 [2024-04-26 14:21:39.851094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.354 [2024-04-26 14:21:39.851105] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.851113] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.851120] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2097d70) 00:16:58.354 [2024-04-26 14:21:39.851131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.354 [2024-04-26 14:21:39.851141] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:16:58.354 [2024-04-26 14:21:39.851162] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:58.354 [2024-04-26 14:21:39.851176] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.851184] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2097d70) 00:16:58.354 [2024-04-26 14:21:39.851197] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.354 [2024-04-26 14:21:39.851221] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2100ec0, cid 0, qid 0 00:16:58.354 [2024-04-26 14:21:39.851233] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2101020, cid 1, qid 0 00:16:58.354 [2024-04-26 14:21:39.851242] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2101180, cid 2, qid 0 00:16:58.354 [2024-04-26 14:21:39.851251] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21012e0, cid 3, qid 0 00:16:58.354 [2024-04-26 14:21:39.851265] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2101440, cid 4, qid 0 00:16:58.354 [2024-04-26 14:21:39.851402] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.354 [2024-04-26 14:21:39.851417] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.354 [2024-04-26 14:21:39.851425] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.851433] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2101440) on tqpair=0x2097d70 
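The -L all trace above is the admin-queue bring-up state machine, step by step: FABRIC CONNECT, PROPERTY GET of VS and CAP, CC.EN set to 1, polling until CSTS.RDY reads 1, IDENTIFY controller, then four ASYNC EVENT REQUESTs (one per slot permitted by the controller's Async Event Request Limit) and a read of the keep-alive timer. As a rough cross-check from the initiator side, the same discovery handshake can be driven by the kernel stack, assuming nvme-cli is available (the harness already ran modprobe nvme-tcp and used nvme gen-hostnqn earlier):

    nvme discover -t tcp -a 10.0.0.2 -s 4420    # kernel-initiator view of the same discovery service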
00:16:58.354 [2024-04-26 14:21:39.851446] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:16:58.354 [2024-04-26 14:21:39.851456] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:16:58.354 [2024-04-26 14:21:39.851475] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.851485] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2097d70) 00:16:58.354 [2024-04-26 14:21:39.851498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.354 [2024-04-26 14:21:39.851520] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2101440, cid 4, qid 0 00:16:58.354 [2024-04-26 14:21:39.851629] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:58.354 [2024-04-26 14:21:39.851657] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:58.354 [2024-04-26 14:21:39.851665] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.851673] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2097d70): datao=0, datal=4096, cccid=4 00:16:58.354 [2024-04-26 14:21:39.851682] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2101440) on tqpair(0x2097d70): expected_datao=0, payload_size=4096 00:16:58.354 [2024-04-26 14:21:39.851691] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.851709] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.851719] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.851751] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.354 [2024-04-26 14:21:39.851764] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.354 [2024-04-26 14:21:39.851771] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.851779] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2101440) on tqpair=0x2097d70 00:16:58.354 [2024-04-26 14:21:39.851801] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:16:58.354 [2024-04-26 14:21:39.851833] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.851844] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2097d70) 00:16:58.354 [2024-04-26 14:21:39.851857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.354 [2024-04-26 14:21:39.851871] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.851879] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.851886] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2097d70) 00:16:58.354 [2024-04-26 14:21:39.851897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.354 [2024-04-26 14:21:39.851927] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2101440, cid 4, qid 0 00:16:58.354 [2024-04-26 14:21:39.851940] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21015a0, cid 5, qid 0 00:16:58.354 [2024-04-26 14:21:39.852092] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:58.354 [2024-04-26 14:21:39.852108] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:58.354 [2024-04-26 14:21:39.852120] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.852128] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2097d70): datao=0, datal=1024, cccid=4 00:16:58.354 [2024-04-26 14:21:39.852138] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2101440) on tqpair(0x2097d70): expected_datao=0, payload_size=1024 00:16:58.354 [2024-04-26 14:21:39.852146] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.852158] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.852166] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.852176] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.354 [2024-04-26 14:21:39.852186] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.354 [2024-04-26 14:21:39.852194] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.852202] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21015a0) on tqpair=0x2097d70 00:16:58.354 [2024-04-26 14:21:39.895647] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.354 [2024-04-26 14:21:39.895668] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.354 [2024-04-26 14:21:39.895677] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.895685] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2101440) on tqpair=0x2097d70 00:16:58.354 [2024-04-26 14:21:39.895707] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.354 [2024-04-26 14:21:39.895718] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2097d70) 00:16:58.354 [2024-04-26 14:21:39.895731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.354 [2024-04-26 14:21:39.895767] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2101440, cid 4, qid 0 00:16:58.354 [2024-04-26 14:21:39.895888] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:58.354 [2024-04-26 14:21:39.895902] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:58.354 [2024-04-26 14:21:39.895910] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:58.355 [2024-04-26 14:21:39.895917] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2097d70): datao=0, datal=3072, cccid=4 00:16:58.355 [2024-04-26 14:21:39.895926] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2101440) on tqpair(0x2097d70): expected_datao=0, payload_size=3072 00:16:58.355 [2024-04-26 14:21:39.895935] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.355 [2024-04-26 14:21:39.895947] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
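The three GET LOG PAGE commands in this stretch (cdw10 0x00ff0070, 0x02ff0070, 0x00010070, all against log page 0x70) are the discovery-log fetch: read an initial chunk to learn the generation counter and record count, read the remaining records, then re-read the 8-byte generation counter to confirm the log did not change mid-transfer. The decoded result is the report that follows. To reproduce it outside the harness, the invocation from identify.sh@39 is:

    build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all    # -L all turns on every debug log flag; omit it to get just the report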
00:16:58.355 [2024-04-26 14:21:39.895955] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:58.355 [2024-04-26 14:21:39.895968] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.355 [2024-04-26 14:21:39.895980] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.355 [2024-04-26 14:21:39.895987] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.355 [2024-04-26 14:21:39.895995] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2101440) on tqpair=0x2097d70 00:16:58.355 [2024-04-26 14:21:39.896014] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.355 [2024-04-26 14:21:39.896024] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2097d70) 00:16:58.355 [2024-04-26 14:21:39.896036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.355 [2024-04-26 14:21:39.896066] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2101440, cid 4, qid 0 00:16:58.355 [2024-04-26 14:21:39.896183] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:58.355 [2024-04-26 14:21:39.896196] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:58.355 [2024-04-26 14:21:39.896204] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:58.355 [2024-04-26 14:21:39.896216] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2097d70): datao=0, datal=8, cccid=4 00:16:58.355 [2024-04-26 14:21:39.896226] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2101440) on tqpair(0x2097d70): expected_datao=0, payload_size=8 00:16:58.355 [2024-04-26 14:21:39.896235] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.355 [2024-04-26 14:21:39.896246] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:58.355 [2024-04-26 14:21:39.896254] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:58.617 [2024-04-26 14:21:39.936730] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.617 [2024-04-26 14:21:39.936757] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.617 [2024-04-26 14:21:39.936766] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.617 [2024-04-26 14:21:39.936774] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2101440) on tqpair=0x2097d70 00:16:58.617 ===================================================== 00:16:58.617 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:58.617 ===================================================== 00:16:58.617 Controller Capabilities/Features 00:16:58.617 ================================ 00:16:58.617 Vendor ID: 0000 00:16:58.617 Subsystem Vendor ID: 0000 00:16:58.617 Serial Number: .................... 00:16:58.617 Model Number: ........................................ 
00:16:58.617 Firmware Version: 24.05 00:16:58.617 Recommended Arb Burst: 0 00:16:58.617 IEEE OUI Identifier: 00 00 00 00:16:58.617 Multi-path I/O 00:16:58.617 May have multiple subsystem ports: No 00:16:58.617 May have multiple controllers: No 00:16:58.617 Associated with SR-IOV VF: No 00:16:58.617 Max Data Transfer Size: 131072 00:16:58.617 Max Number of Namespaces: 0 00:16:58.617 Max Number of I/O Queues: 1024 00:16:58.617 NVMe Specification Version (VS): 1.3 00:16:58.617 NVMe Specification Version (Identify): 1.3 00:16:58.617 Maximum Queue Entries: 128 00:16:58.617 Contiguous Queues Required: Yes 00:16:58.617 Arbitration Mechanisms Supported 00:16:58.617 Weighted Round Robin: Not Supported 00:16:58.617 Vendor Specific: Not Supported 00:16:58.617 Reset Timeout: 15000 ms 00:16:58.617 Doorbell Stride: 4 bytes 00:16:58.617 NVM Subsystem Reset: Not Supported 00:16:58.617 Command Sets Supported 00:16:58.617 NVM Command Set: Supported 00:16:58.617 Boot Partition: Not Supported 00:16:58.617 Memory Page Size Minimum: 4096 bytes 00:16:58.617 Memory Page Size Maximum: 4096 bytes 00:16:58.617 Persistent Memory Region: Not Supported 00:16:58.617 Optional Asynchronous Events Supported 00:16:58.617 Namespace Attribute Notices: Not Supported 00:16:58.617 Firmware Activation Notices: Not Supported 00:16:58.617 ANA Change Notices: Not Supported 00:16:58.617 PLE Aggregate Log Change Notices: Not Supported 00:16:58.617 LBA Status Info Alert Notices: Not Supported 00:16:58.617 EGE Aggregate Log Change Notices: Not Supported 00:16:58.617 Normal NVM Subsystem Shutdown event: Not Supported 00:16:58.617 Zone Descriptor Change Notices: Not Supported 00:16:58.617 Discovery Log Change Notices: Supported 00:16:58.617 Controller Attributes 00:16:58.617 128-bit Host Identifier: Not Supported 00:16:58.617 Non-Operational Permissive Mode: Not Supported 00:16:58.617 NVM Sets: Not Supported 00:16:58.617 Read Recovery Levels: Not Supported 00:16:58.617 Endurance Groups: Not Supported 00:16:58.617 Predictable Latency Mode: Not Supported 00:16:58.617 Traffic Based Keep ALive: Not Supported 00:16:58.617 Namespace Granularity: Not Supported 00:16:58.617 SQ Associations: Not Supported 00:16:58.617 UUID List: Not Supported 00:16:58.617 Multi-Domain Subsystem: Not Supported 00:16:58.617 Fixed Capacity Management: Not Supported 00:16:58.617 Variable Capacity Management: Not Supported 00:16:58.617 Delete Endurance Group: Not Supported 00:16:58.617 Delete NVM Set: Not Supported 00:16:58.617 Extended LBA Formats Supported: Not Supported 00:16:58.617 Flexible Data Placement Supported: Not Supported 00:16:58.617 00:16:58.617 Controller Memory Buffer Support 00:16:58.617 ================================ 00:16:58.617 Supported: No 00:16:58.617 00:16:58.617 Persistent Memory Region Support 00:16:58.617 ================================ 00:16:58.617 Supported: No 00:16:58.617 00:16:58.617 Admin Command Set Attributes 00:16:58.617 ============================ 00:16:58.617 Security Send/Receive: Not Supported 00:16:58.617 Format NVM: Not Supported 00:16:58.617 Firmware Activate/Download: Not Supported 00:16:58.617 Namespace Management: Not Supported 00:16:58.617 Device Self-Test: Not Supported 00:16:58.617 Directives: Not Supported 00:16:58.617 NVMe-MI: Not Supported 00:16:58.617 Virtualization Management: Not Supported 00:16:58.617 Doorbell Buffer Config: Not Supported 00:16:58.617 Get LBA Status Capability: Not Supported 00:16:58.617 Command & Feature Lockdown Capability: Not Supported 00:16:58.617 Abort Command Limit: 1 00:16:58.617 Async 
Event Request Limit: 4 00:16:58.617 Number of Firmware Slots: N/A 00:16:58.617 Firmware Slot 1 Read-Only: N/A 00:16:58.617 Firmware Activation Without Reset: N/A 00:16:58.617 Multiple Update Detection Support: N/A 00:16:58.617 Firmware Update Granularity: No Information Provided 00:16:58.617 Per-Namespace SMART Log: No 00:16:58.617 Asymmetric Namespace Access Log Page: Not Supported 00:16:58.617 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:58.617 Command Effects Log Page: Not Supported 00:16:58.617 Get Log Page Extended Data: Supported 00:16:58.617 Telemetry Log Pages: Not Supported 00:16:58.617 Persistent Event Log Pages: Not Supported 00:16:58.617 Supported Log Pages Log Page: May Support 00:16:58.617 Commands Supported & Effects Log Page: Not Supported 00:16:58.617 Feature Identifiers & Effects Log Page:May Support 00:16:58.617 NVMe-MI Commands & Effects Log Page: May Support 00:16:58.617 Data Area 4 for Telemetry Log: Not Supported 00:16:58.617 Error Log Page Entries Supported: 128 00:16:58.617 Keep Alive: Not Supported 00:16:58.617 00:16:58.617 NVM Command Set Attributes 00:16:58.617 ========================== 00:16:58.617 Submission Queue Entry Size 00:16:58.617 Max: 1 00:16:58.617 Min: 1 00:16:58.617 Completion Queue Entry Size 00:16:58.617 Max: 1 00:16:58.617 Min: 1 00:16:58.617 Number of Namespaces: 0 00:16:58.617 Compare Command: Not Supported 00:16:58.617 Write Uncorrectable Command: Not Supported 00:16:58.617 Dataset Management Command: Not Supported 00:16:58.617 Write Zeroes Command: Not Supported 00:16:58.617 Set Features Save Field: Not Supported 00:16:58.617 Reservations: Not Supported 00:16:58.617 Timestamp: Not Supported 00:16:58.617 Copy: Not Supported 00:16:58.617 Volatile Write Cache: Not Present 00:16:58.617 Atomic Write Unit (Normal): 1 00:16:58.617 Atomic Write Unit (PFail): 1 00:16:58.617 Atomic Compare & Write Unit: 1 00:16:58.617 Fused Compare & Write: Supported 00:16:58.617 Scatter-Gather List 00:16:58.617 SGL Command Set: Supported 00:16:58.617 SGL Keyed: Supported 00:16:58.617 SGL Bit Bucket Descriptor: Not Supported 00:16:58.617 SGL Metadata Pointer: Not Supported 00:16:58.617 Oversized SGL: Not Supported 00:16:58.617 SGL Metadata Address: Not Supported 00:16:58.617 SGL Offset: Supported 00:16:58.617 Transport SGL Data Block: Not Supported 00:16:58.617 Replay Protected Memory Block: Not Supported 00:16:58.617 00:16:58.617 Firmware Slot Information 00:16:58.617 ========================= 00:16:58.617 Active slot: 0 00:16:58.617 00:16:58.617 00:16:58.617 Error Log 00:16:58.617 ========= 00:16:58.617 00:16:58.617 Active Namespaces 00:16:58.617 ================= 00:16:58.617 Discovery Log Page 00:16:58.617 ================== 00:16:58.617 Generation Counter: 2 00:16:58.617 Number of Records: 2 00:16:58.617 Record Format: 0 00:16:58.617 00:16:58.617 Discovery Log Entry 0 00:16:58.617 ---------------------- 00:16:58.617 Transport Type: 3 (TCP) 00:16:58.617 Address Family: 1 (IPv4) 00:16:58.617 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:58.617 Entry Flags: 00:16:58.617 Duplicate Returned Information: 1 00:16:58.617 Explicit Persistent Connection Support for Discovery: 1 00:16:58.617 Transport Requirements: 00:16:58.617 Secure Channel: Not Required 00:16:58.617 Port ID: 0 (0x0000) 00:16:58.617 Controller ID: 65535 (0xffff) 00:16:58.617 Admin Max SQ Size: 128 00:16:58.617 Transport Service Identifier: 4420 00:16:58.617 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:58.617 Transport Address: 10.0.0.2 00:16:58.617 
Discovery Log Entry 1 00:16:58.617 ---------------------- 00:16:58.617 Transport Type: 3 (TCP) 00:16:58.617 Address Family: 1 (IPv4) 00:16:58.617 Subsystem Type: 2 (NVM Subsystem) 00:16:58.617 Entry Flags: 00:16:58.617 Duplicate Returned Information: 0 00:16:58.617 Explicit Persistent Connection Support for Discovery: 0 00:16:58.617 Transport Requirements: 00:16:58.617 Secure Channel: Not Required 00:16:58.617 Port ID: 0 (0x0000) 00:16:58.617 Controller ID: 65535 (0xffff) 00:16:58.617 Admin Max SQ Size: 128 00:16:58.618 Transport Service Identifier: 4420 00:16:58.618 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:16:58.618 Transport Address: 10.0.0.2 [2024-04-26 14:21:39.936899] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:16:58.618 [2024-04-26 14:21:39.936927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.618 [2024-04-26 14:21:39.936941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.618 [2024-04-26 14:21:39.936953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.618 [2024-04-26 14:21:39.936964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.618 [2024-04-26 14:21:39.936979] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.618 [2024-04-26 14:21:39.936988] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.618 [2024-04-26 14:21:39.936996] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2097d70) 00:16:58.618 [2024-04-26 14:21:39.937009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.618 [2024-04-26 14:21:39.937037] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21012e0, cid 3, qid 0 00:16:58.618 [2024-04-26 14:21:39.937138] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.618 [2024-04-26 14:21:39.937154] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.618 [2024-04-26 14:21:39.937162] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.618 [2024-04-26 14:21:39.937170] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x21012e0) on tqpair=0x2097d70 00:16:58.618 [2024-04-26 14:21:39.937185] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.618 [2024-04-26 14:21:39.937194] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.618 [2024-04-26 14:21:39.937201] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2097d70) 00:16:58.618 [2024-04-26 14:21:39.937214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.618 [2024-04-26 14:21:39.937241] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21012e0, cid 3, qid 0 00:16:58.618 [2024-04-26 14:21:39.937373] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.618 [2024-04-26 14:21:39.937389] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.618 [2024-04-26 14:21:39.937397] 
00:16:58.618 [2024-04-26 14:21:39.936899] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:16:58.618 [2024-04-26 14:21:39.936927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:58.618 [... three further identical ABORTED - SQ DELETION completions elided ...]
00:16:58.618 [2024-04-26 14:21:39.937417] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us
00:16:58.618 [2024-04-26 14:21:39.937427] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms
00:16:58.618 [... FABRIC PROPERTY SET (CC shutdown write) followed by repeated FABRIC PROPERTY GET polling of CSTS; the recurring nvme_tcp PDU *DEBUG* traces (pdu type = 5 / enter / complete tcp_req(0x21012e0) on tqpair=0x2097d70 / capsule_cmd cid=3) elided ...]
00:16:58.619 [2024-04-26 14:21:39.944920] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds
00:16:58.619 
00:16:58.619 14:21:39 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
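This second identify pass targets the NVM subsystem directly rather than the discovery controller. A usage sketch of the same invocation: `-r` takes an SPDK transport-ID string, and `-L all` enables the tool's debug log flags, which is what produces the verbose *DEBUG* tracing in the output below; omitting it gives only the identify summary:

  # Identify the cnode1 subsystem over NVMe/TCP (paths and values as in the log above)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -L all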
00:16:58.619 [2024-04-26 14:21:39.981416] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3178565 ] 00:16:58.619 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.619 [2024-04-26 14:21:40.022135] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:16:58.619 [2024-04-26 14:21:40.022201] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:58.619 [2024-04-26 14:21:40.022212] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:58.619 [2024-04-26 14:21:40.022230] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:58.619 [2024-04-26 14:21:40.022243] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:58.619 [2024-04-26 14:21:40.022477] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:16:58.619 [2024-04-26 14:21:40.022527] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a1fd70 0 00:16:58.619 [2024-04-26 14:21:40.036648] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:58.619 [2024-04-26 14:21:40.036670] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:58.619 [2024-04-26 14:21:40.036680] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:58.619 [2024-04-26 14:21:40.036688] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:58.619 [2024-04-26 14:21:40.036733] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.619 [2024-04-26 14:21:40.036746] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.619 [2024-04-26 14:21:40.036755] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a1fd70) 00:16:58.619 [2024-04-26 14:21:40.036774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:58.619 [2024-04-26 14:21:40.036802] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a88ec0, cid 0, qid 0 00:16:58.619 [2024-04-26 14:21:40.042645] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.619 [2024-04-26 14:21:40.042665] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.619 [2024-04-26 14:21:40.042674] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.619 [2024-04-26 14:21:40.042682] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a88ec0) on tqpair=0x1a1fd70 00:16:58.619 [2024-04-26 14:21:40.042700] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:58.619 [2024-04-26 14:21:40.042717] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:16:58.619 [2024-04-26 14:21:40.042728] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:16:58.619 [2024-04-26 14:21:40.042749] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.619 [2024-04-26 14:21:40.042759] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.619 [2024-04-26 
14:21:40.042767] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a1fd70) 00:16:58.619 [2024-04-26 14:21:40.042780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.619 [2024-04-26 14:21:40.042806] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a88ec0, cid 0, qid 0 00:16:58.619 [2024-04-26 14:21:40.042948] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.619 [2024-04-26 14:21:40.042964] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.619 [2024-04-26 14:21:40.042972] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.619 [2024-04-26 14:21:40.042980] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a88ec0) on tqpair=0x1a1fd70 00:16:58.619 [2024-04-26 14:21:40.042991] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:16:58.619 [2024-04-26 14:21:40.043007] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:16:58.619 [2024-04-26 14:21:40.043021] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.619 [2024-04-26 14:21:40.043029] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.619 [2024-04-26 14:21:40.043037] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a1fd70) 00:16:58.619 [2024-04-26 14:21:40.043049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.619 [2024-04-26 14:21:40.043072] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a88ec0, cid 0, qid 0 00:16:58.619 [2024-04-26 14:21:40.043194] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.619 [2024-04-26 14:21:40.043214] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.619 [2024-04-26 14:21:40.043222] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.619 [2024-04-26 14:21:40.043230] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a88ec0) on tqpair=0x1a1fd70 00:16:58.619 [2024-04-26 14:21:40.043241] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:16:58.619 [2024-04-26 14:21:40.043256] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:16:58.619 [2024-04-26 14:21:40.043270] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.619 [2024-04-26 14:21:40.043278] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.619 [2024-04-26 14:21:40.043286] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a1fd70) 00:16:58.619 [2024-04-26 14:21:40.043297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.619 [2024-04-26 14:21:40.043319] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a88ec0, cid 0, qid 0 00:16:58.619 [2024-04-26 14:21:40.043445] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.619 [2024-04-26 14:21:40.043458] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:16:58.619 [2024-04-26 14:21:40.043466] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.619 [2024-04-26 14:21:40.043474] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a88ec0) on tqpair=0x1a1fd70 00:16:58.620 [2024-04-26 14:21:40.043486] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:58.620 [2024-04-26 14:21:40.043508] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.043519] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.043527] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a1fd70) 00:16:58.620 [2024-04-26 14:21:40.043539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.620 [2024-04-26 14:21:40.043561] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a88ec0, cid 0, qid 0 00:16:58.620 [2024-04-26 14:21:40.047648] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.620 [2024-04-26 14:21:40.047666] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.620 [2024-04-26 14:21:40.047675] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.047683] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a88ec0) on tqpair=0x1a1fd70 00:16:58.620 [2024-04-26 14:21:40.047694] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:16:58.620 [2024-04-26 14:21:40.047704] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:16:58.620 [2024-04-26 14:21:40.047719] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:58.620 [2024-04-26 14:21:40.047830] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:16:58.620 [2024-04-26 14:21:40.047839] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:58.620 [2024-04-26 14:21:40.047854] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.047863] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.047870] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a1fd70) 00:16:58.620 [2024-04-26 14:21:40.047883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.620 [2024-04-26 14:21:40.047907] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a88ec0, cid 0, qid 0 00:16:58.620 [2024-04-26 14:21:40.048043] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.620 [2024-04-26 14:21:40.048057] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.620 [2024-04-26 14:21:40.048065] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.048073] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a88ec0) on 
tqpair=0x1a1fd70 00:16:58.620 [2024-04-26 14:21:40.048084] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:58.620 [2024-04-26 14:21:40.048102] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.048112] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.048119] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a1fd70) 00:16:58.620 [2024-04-26 14:21:40.048131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.620 [2024-04-26 14:21:40.048153] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a88ec0, cid 0, qid 0 00:16:58.620 [2024-04-26 14:21:40.048281] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.620 [2024-04-26 14:21:40.048296] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.620 [2024-04-26 14:21:40.048304] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.048312] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a88ec0) on tqpair=0x1a1fd70 00:16:58.620 [2024-04-26 14:21:40.048323] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:58.620 [2024-04-26 14:21:40.048337] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:16:58.620 [2024-04-26 14:21:40.048354] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:16:58.620 [2024-04-26 14:21:40.048373] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:16:58.620 [2024-04-26 14:21:40.048392] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.048402] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a1fd70) 00:16:58.620 [2024-04-26 14:21:40.048415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.620 [2024-04-26 14:21:40.048438] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a88ec0, cid 0, qid 0 00:16:58.620 [2024-04-26 14:21:40.048624] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:58.620 [2024-04-26 14:21:40.048651] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:58.620 [2024-04-26 14:21:40.048660] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.048668] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a1fd70): datao=0, datal=4096, cccid=0 00:16:58.620 [2024-04-26 14:21:40.048677] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a88ec0) on tqpair(0x1a1fd70): expected_datao=0, payload_size=4096 00:16:58.620 [2024-04-26 14:21:40.048687] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.048707] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.048717] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.089770] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.620 [2024-04-26 14:21:40.089801] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.620 [2024-04-26 14:21:40.089811] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.089820] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a88ec0) on tqpair=0x1a1fd70 00:16:58.620 [2024-04-26 14:21:40.089840] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:16:58.620 [2024-04-26 14:21:40.089851] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:16:58.620 [2024-04-26 14:21:40.089860] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:16:58.620 [2024-04-26 14:21:40.089869] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:16:58.620 [2024-04-26 14:21:40.089878] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:16:58.620 [2024-04-26 14:21:40.089887] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:16:58.620 [2024-04-26 14:21:40.089906] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:16:58.620 [2024-04-26 14:21:40.089922] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.089931] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.089939] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a1fd70) 00:16:58.620 [2024-04-26 14:21:40.089955] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:58.620 [2024-04-26 14:21:40.089981] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a88ec0, cid 0, qid 0 00:16:58.620 [2024-04-26 14:21:40.090128] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.620 [2024-04-26 14:21:40.090150] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.620 [2024-04-26 14:21:40.090159] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.090167] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a88ec0) on tqpair=0x1a1fd70 00:16:58.620 [2024-04-26 14:21:40.090182] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.090191] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.090199] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a1fd70) 00:16:58.620 [2024-04-26 14:21:40.090211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.620 [2024-04-26 14:21:40.090223] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.090231] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.090239] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a1fd70) 00:16:58.620 [2024-04-26 14:21:40.090249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.620 [2024-04-26 14:21:40.090260] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.090269] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.090277] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a1fd70) 00:16:58.620 [2024-04-26 14:21:40.090287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.620 [2024-04-26 14:21:40.090299] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.090307] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.090315] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a1fd70) 00:16:58.620 [2024-04-26 14:21:40.090325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.620 [2024-04-26 14:21:40.090335] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:58.620 [2024-04-26 14:21:40.090359] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:58.620 [2024-04-26 14:21:40.090374] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.090383] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a1fd70) 00:16:58.620 [2024-04-26 14:21:40.090395] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.620 [2024-04-26 14:21:40.090420] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a88ec0, cid 0, qid 0 00:16:58.620 [2024-04-26 14:21:40.090432] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a89020, cid 1, qid 0 00:16:58.620 [2024-04-26 14:21:40.090442] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a89180, cid 2, qid 0 00:16:58.620 [2024-04-26 14:21:40.090451] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a892e0, cid 3, qid 0 00:16:58.620 [2024-04-26 14:21:40.090460] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a89440, cid 4, qid 0 00:16:58.620 [2024-04-26 14:21:40.090638] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.620 [2024-04-26 14:21:40.090654] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.620 [2024-04-26 14:21:40.090663] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.090671] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a89440) on tqpair=0x1a1fd70 00:16:58.620 [2024-04-26 14:21:40.090683] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:16:58.620 [2024-04-26 14:21:40.090698] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:16:58.620 [2024-04-26 14:21:40.090719] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:16:58.620 [2024-04-26 14:21:40.090732] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:58.620 [2024-04-26 14:21:40.090746] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.090754] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.090762] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a1fd70) 00:16:58.620 [2024-04-26 14:21:40.090774] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:58.620 [2024-04-26 14:21:40.090797] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a89440, cid 4, qid 0 00:16:58.620 [2024-04-26 14:21:40.090940] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.620 [2024-04-26 14:21:40.090954] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.620 [2024-04-26 14:21:40.090962] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.620 [2024-04-26 14:21:40.090969] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a89440) on tqpair=0x1a1fd70 00:16:58.620 [2024-04-26 14:21:40.091037] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:16:58.621 [2024-04-26 14:21:40.091060] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:58.621 [2024-04-26 14:21:40.091077] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.091086] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a1fd70) 00:16:58.621 [2024-04-26 14:21:40.091098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.621 [2024-04-26 14:21:40.091121] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a89440, cid 4, qid 0 00:16:58.621 [2024-04-26 14:21:40.091280] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:58.621 [2024-04-26 14:21:40.091296] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:58.621 [2024-04-26 14:21:40.091304] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.091313] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a1fd70): datao=0, datal=4096, cccid=4 00:16:58.621 [2024-04-26 14:21:40.091322] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a89440) on tqpair(0x1a1fd70): expected_datao=0, payload_size=4096 00:16:58.621 [2024-04-26 14:21:40.091331] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.091344] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.091353] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.091367] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.621 [2024-04-26 14:21:40.091378] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.621 [2024-04-26 14:21:40.091386] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.091394] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a89440) on tqpair=0x1a1fd70 00:16:58.621 [2024-04-26 14:21:40.091412] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:16:58.621 [2024-04-26 14:21:40.091431] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:16:58.621 [2024-04-26 14:21:40.091452] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:16:58.621 [2024-04-26 14:21:40.091471] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.091481] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a1fd70) 00:16:58.621 [2024-04-26 14:21:40.091493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.621 [2024-04-26 14:21:40.091516] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a89440, cid 4, qid 0 00:16:58.621 [2024-04-26 14:21:40.095659] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:58.621 [2024-04-26 14:21:40.095678] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:58.621 [2024-04-26 14:21:40.095686] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.095694] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a1fd70): datao=0, datal=4096, cccid=4 00:16:58.621 [2024-04-26 14:21:40.095704] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a89440) on tqpair(0x1a1fd70): expected_datao=0, payload_size=4096 00:16:58.621 [2024-04-26 14:21:40.095713] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.095725] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.095733] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.095744] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.621 [2024-04-26 14:21:40.095754] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.621 [2024-04-26 14:21:40.095762] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.095770] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a89440) on tqpair=0x1a1fd70 00:16:58.621 [2024-04-26 14:21:40.095796] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:58.621 [2024-04-26 14:21:40.095818] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:58.621 [2024-04-26 14:21:40.095834] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.095843] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1a1fd70) 00:16:58.621 [2024-04-26 14:21:40.095855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.621 [2024-04-26 14:21:40.095879] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a89440, cid 4, qid 0 00:16:58.621 [2024-04-26 14:21:40.096000] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:58.621 [2024-04-26 14:21:40.096018] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:58.621 [2024-04-26 14:21:40.096026] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.096034] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a1fd70): datao=0, datal=4096, cccid=4 00:16:58.621 [2024-04-26 14:21:40.096043] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a89440) on tqpair(0x1a1fd70): expected_datao=0, payload_size=4096 00:16:58.621 [2024-04-26 14:21:40.096052] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.096064] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.096072] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.096086] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.621 [2024-04-26 14:21:40.096097] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.621 [2024-04-26 14:21:40.096105] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.096113] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a89440) on tqpair=0x1a1fd70 00:16:58.621 [2024-04-26 14:21:40.096136] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:58.621 [2024-04-26 14:21:40.096154] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:16:58.621 [2024-04-26 14:21:40.096172] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:16:58.621 [2024-04-26 14:21:40.096185] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:58.621 [2024-04-26 14:21:40.096195] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:16:58.621 [2024-04-26 14:21:40.096205] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:16:58.621 [2024-04-26 14:21:40.096214] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:16:58.621 [2024-04-26 14:21:40.096225] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:16:58.621 [2024-04-26 14:21:40.096248] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.096258] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a1fd70) 00:16:58.621 [2024-04-26 14:21:40.096271] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.621 [2024-04-26 14:21:40.096291] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.096300] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.096307] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a1fd70) 00:16:58.621 [2024-04-26 14:21:40.096318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.621 [2024-04-26 14:21:40.096346] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a89440, cid 4, qid 0 00:16:58.621 [2024-04-26 14:21:40.096359] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a895a0, cid 5, qid 0 00:16:58.621 [2024-04-26 14:21:40.096472] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.621 [2024-04-26 14:21:40.096486] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.621 [2024-04-26 14:21:40.096494] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.096502] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a89440) on tqpair=0x1a1fd70 00:16:58.621 [2024-04-26 14:21:40.096518] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.621 [2024-04-26 14:21:40.096529] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.621 [2024-04-26 14:21:40.096536] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.096544] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a895a0) on tqpair=0x1a1fd70 00:16:58.621 [2024-04-26 14:21:40.096562] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.096573] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a1fd70) 00:16:58.621 [2024-04-26 14:21:40.096585] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.621 [2024-04-26 14:21:40.096607] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a895a0, cid 5, qid 0 00:16:58.621 [2024-04-26 14:21:40.096718] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.621 [2024-04-26 14:21:40.096735] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.621 [2024-04-26 14:21:40.096743] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.096751] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a895a0) on tqpair=0x1a1fd70 00:16:58.621 [2024-04-26 14:21:40.096774] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.096786] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a1fd70) 00:16:58.621 [2024-04-26 14:21:40.096798] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.621 [2024-04-26 14:21:40.096820] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a895a0, cid 5, qid 0 00:16:58.621 [2024-04-26 14:21:40.096923] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.621 [2024-04-26 14:21:40.096939] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.621 [2024-04-26 14:21:40.096947] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.096955] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a895a0) on tqpair=0x1a1fd70 00:16:58.621 [2024-04-26 14:21:40.096974] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.621 [2024-04-26 14:21:40.096984] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a1fd70) 00:16:58.621 [2024-04-26 14:21:40.096996] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.622 [2024-04-26 14:21:40.097018] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a895a0, cid 5, qid 0 00:16:58.622 [2024-04-26 14:21:40.097115] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.622 [2024-04-26 14:21:40.097129] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.622 [2024-04-26 14:21:40.097137] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.622 [2024-04-26 14:21:40.097145] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a895a0) on tqpair=0x1a1fd70 00:16:58.622 [2024-04-26 14:21:40.097170] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.622 [2024-04-26 14:21:40.097181] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a1fd70) 00:16:58.622 [2024-04-26 14:21:40.097194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.622 [2024-04-26 14:21:40.097208] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.622 [2024-04-26 14:21:40.097217] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a1fd70) 00:16:58.622 [2024-04-26 14:21:40.097229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.622 [2024-04-26 14:21:40.097243] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.622 [2024-04-26 14:21:40.097252] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1a1fd70) 00:16:58.622 [2024-04-26 14:21:40.097263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.622 [2024-04-26 14:21:40.097279] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.622 [2024-04-26 14:21:40.097288] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a1fd70) 00:16:58.622 [2024-04-26 14:21:40.097299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.622 [2024-04-26 14:21:40.097323] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a895a0, cid 5, qid 0 00:16:58.622 [2024-04-26 14:21:40.097335] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a89440, cid 4, qid 0 00:16:58.622 [2024-04-26 14:21:40.097344] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1a89700, cid 6, qid 0 00:16:58.622 [2024-04-26 14:21:40.097353] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a89860, cid 7, qid 0 00:16:58.622 [2024-04-26 14:21:40.097546] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:58.622 [2024-04-26 14:21:40.097562] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:58.622 [2024-04-26 14:21:40.097570] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:58.622 [2024-04-26 14:21:40.097577] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a1fd70): datao=0, datal=8192, cccid=5 00:16:58.622 [2024-04-26 14:21:40.097587] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a895a0) on tqpair(0x1a1fd70): expected_datao=0, payload_size=8192 00:16:58.622 [2024-04-26 14:21:40.097595] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.622 [2024-04-26 14:21:40.097638] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:58.622 [2024-04-26 14:21:40.097652] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:58.622 [2024-04-26 14:21:40.097663] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:58.622 [2024-04-26 14:21:40.097673] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:58.622 [2024-04-26 14:21:40.097681] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:58.622 [2024-04-26 14:21:40.097689] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a1fd70): datao=0, datal=512, cccid=4 00:16:58.622 [2024-04-26 14:21:40.097698] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a89440) on tqpair(0x1a1fd70): expected_datao=0, payload_size=512 00:16:58.622 [2024-04-26 14:21:40.097707] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.622 [2024-04-26 14:21:40.097718] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:58.622 [2024-04-26 14:21:40.097726] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:58.622 [2024-04-26 14:21:40.097736] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:58.622 [2024-04-26 14:21:40.097746] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:58.622 [2024-04-26 14:21:40.097754] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:58.622 [2024-04-26 14:21:40.097761] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a1fd70): datao=0, datal=512, cccid=6 00:16:58.622 [2024-04-26 14:21:40.097770] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a89700) on tqpair(0x1a1fd70): expected_datao=0, payload_size=512 00:16:58.622 [2024-04-26 14:21:40.097779] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.622 [2024-04-26 14:21:40.097790] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:58.622 [2024-04-26 14:21:40.097798] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:58.622 [2024-04-26 14:21:40.097808] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:58.622 [2024-04-26 14:21:40.097818] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:58.622 [2024-04-26 14:21:40.097826] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:58.622 [2024-04-26 14:21:40.097833] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a1fd70): datao=0, datal=4096, cccid=7 
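Annotation: the "pdu type" values in these traces follow the NVMe/TCP PDU encoding: type 5 is a CapsuleResp (dispatched to nvme_tcp_capsule_resp_hdr_handle) and type 7 is C2HData (nvme_tcp_c2h_data_hdr_handle), which is why each Get Log Page capsule above is answered by C2HData PDUs carrying the datal=512/4096/8192 payloads before the final completion. A hypothetical way to reproduce this level of tracing against the same target; a debug build is assumed, and the log-flag option name varies across SPDK releases, so verify it with the tool's --help:

  # hypothetical re-run of the identify example with TCP-layer tracing enabled
  ./build/examples/identify -L nvme_tcp \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'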
00:16:58.622 [2024-04-26 14:21:40.097842] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a89860) on tqpair(0x1a1fd70): expected_datao=0, payload_size=4096 00:16:58.622 [2024-04-26 14:21:40.097851] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.622 [2024-04-26 14:21:40.097863] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:58.622 [2024-04-26 14:21:40.097871] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:58.622 [2024-04-26 14:21:40.097885] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.622 [2024-04-26 14:21:40.097896] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.622 [2024-04-26 14:21:40.097903] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.622 [2024-04-26 14:21:40.097911] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a895a0) on tqpair=0x1a1fd70 00:16:58.622 [2024-04-26 14:21:40.097939] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.622 [2024-04-26 14:21:40.097951] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.622 [2024-04-26 14:21:40.097959] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.622 [2024-04-26 14:21:40.097970] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a89440) on tqpair=0x1a1fd70 00:16:58.622 [2024-04-26 14:21:40.097988] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.622 [2024-04-26 14:21:40.098000] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.622 [2024-04-26 14:21:40.098008] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.622 [2024-04-26 14:21:40.098016] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a89700) on tqpair=0x1a1fd70 00:16:58.622 [2024-04-26 14:21:40.098030] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.622 [2024-04-26 14:21:40.098041] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.622 [2024-04-26 14:21:40.098048] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.622 [2024-04-26 14:21:40.098056] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a89860) on tqpair=0x1a1fd70 00:16:58.622 ===================================================== 00:16:58.622 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:58.622 ===================================================== 00:16:58.622 Controller Capabilities/Features 00:16:58.622 ================================ 00:16:58.622 Vendor ID: 8086 00:16:58.622 Subsystem Vendor ID: 8086 00:16:58.622 Serial Number: SPDK00000000000001 00:16:58.622 Model Number: SPDK bdev Controller 00:16:58.622 Firmware Version: 24.05 00:16:58.622 Recommended Arb Burst: 6 00:16:58.622 IEEE OUI Identifier: e4 d2 5c 00:16:58.622 Multi-path I/O 00:16:58.622 May have multiple subsystem ports: Yes 00:16:58.622 May have multiple controllers: Yes 00:16:58.622 Associated with SR-IOV VF: No 00:16:58.622 Max Data Transfer Size: 131072 00:16:58.622 Max Number of Namespaces: 32 00:16:58.622 Max Number of I/O Queues: 127 00:16:58.622 NVMe Specification Version (VS): 1.3 00:16:58.622 NVMe Specification Version (Identify): 1.3 00:16:58.622 Maximum Queue Entries: 128 00:16:58.622 Contiguous Queues Required: Yes 00:16:58.622 Arbitration Mechanisms Supported 00:16:58.622 Weighted Round Robin: Not Supported 00:16:58.622 Vendor 
Specific: Not Supported 00:16:58.622 Reset Timeout: 15000 ms 00:16:58.622 Doorbell Stride: 4 bytes 00:16:58.622 NVM Subsystem Reset: Not Supported 00:16:58.622 Command Sets Supported 00:16:58.622 NVM Command Set: Supported 00:16:58.622 Boot Partition: Not Supported 00:16:58.622 Memory Page Size Minimum: 4096 bytes 00:16:58.622 Memory Page Size Maximum: 4096 bytes 00:16:58.622 Persistent Memory Region: Not Supported 00:16:58.622 Optional Asynchronous Events Supported 00:16:58.622 Namespace Attribute Notices: Supported 00:16:58.622 Firmware Activation Notices: Not Supported 00:16:58.622 ANA Change Notices: Not Supported 00:16:58.622 PLE Aggregate Log Change Notices: Not Supported 00:16:58.622 LBA Status Info Alert Notices: Not Supported 00:16:58.622 EGE Aggregate Log Change Notices: Not Supported 00:16:58.622 Normal NVM Subsystem Shutdown event: Not Supported 00:16:58.622 Zone Descriptor Change Notices: Not Supported 00:16:58.622 Discovery Log Change Notices: Not Supported 00:16:58.622 Controller Attributes 00:16:58.622 128-bit Host Identifier: Supported 00:16:58.622 Non-Operational Permissive Mode: Not Supported 00:16:58.622 NVM Sets: Not Supported 00:16:58.622 Read Recovery Levels: Not Supported 00:16:58.622 Endurance Groups: Not Supported 00:16:58.622 Predictable Latency Mode: Not Supported 00:16:58.622 Traffic Based Keep ALive: Not Supported 00:16:58.622 Namespace Granularity: Not Supported 00:16:58.622 SQ Associations: Not Supported 00:16:58.622 UUID List: Not Supported 00:16:58.622 Multi-Domain Subsystem: Not Supported 00:16:58.622 Fixed Capacity Management: Not Supported 00:16:58.622 Variable Capacity Management: Not Supported 00:16:58.622 Delete Endurance Group: Not Supported 00:16:58.622 Delete NVM Set: Not Supported 00:16:58.622 Extended LBA Formats Supported: Not Supported 00:16:58.622 Flexible Data Placement Supported: Not Supported 00:16:58.622 00:16:58.622 Controller Memory Buffer Support 00:16:58.622 ================================ 00:16:58.622 Supported: No 00:16:58.622 00:16:58.622 Persistent Memory Region Support 00:16:58.622 ================================ 00:16:58.622 Supported: No 00:16:58.622 00:16:58.622 Admin Command Set Attributes 00:16:58.622 ============================ 00:16:58.622 Security Send/Receive: Not Supported 00:16:58.622 Format NVM: Not Supported 00:16:58.622 Firmware Activate/Download: Not Supported 00:16:58.622 Namespace Management: Not Supported 00:16:58.622 Device Self-Test: Not Supported 00:16:58.622 Directives: Not Supported 00:16:58.622 NVMe-MI: Not Supported 00:16:58.622 Virtualization Management: Not Supported 00:16:58.622 Doorbell Buffer Config: Not Supported 00:16:58.622 Get LBA Status Capability: Not Supported 00:16:58.622 Command & Feature Lockdown Capability: Not Supported 00:16:58.622 Abort Command Limit: 4 00:16:58.622 Async Event Request Limit: 4 00:16:58.622 Number of Firmware Slots: N/A 00:16:58.622 Firmware Slot 1 Read-Only: N/A 00:16:58.622 Firmware Activation Without Reset: N/A 00:16:58.622 Multiple Update Detection Support: N/A 00:16:58.622 Firmware Update Granularity: No Information Provided 00:16:58.622 Per-Namespace SMART Log: No 00:16:58.622 Asymmetric Namespace Access Log Page: Not Supported 00:16:58.622 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:16:58.622 Command Effects Log Page: Supported 00:16:58.622 Get Log Page Extended Data: Supported 00:16:58.622 Telemetry Log Pages: Not Supported 00:16:58.622 Persistent Event Log Pages: Not Supported 00:16:58.622 Supported Log Pages Log Page: May Support 00:16:58.623 Commands 
Supported & Effects Log Page: Not Supported 00:16:58.623 Feature Identifiers & Effects Log Page:May Support 00:16:58.623 NVMe-MI Commands & Effects Log Page: May Support 00:16:58.623 Data Area 4 for Telemetry Log: Not Supported 00:16:58.623 Error Log Page Entries Supported: 128 00:16:58.623 Keep Alive: Supported 00:16:58.623 Keep Alive Granularity: 10000 ms 00:16:58.623 00:16:58.623 NVM Command Set Attributes 00:16:58.623 ========================== 00:16:58.623 Submission Queue Entry Size 00:16:58.623 Max: 64 00:16:58.623 Min: 64 00:16:58.623 Completion Queue Entry Size 00:16:58.623 Max: 16 00:16:58.623 Min: 16 00:16:58.623 Number of Namespaces: 32 00:16:58.623 Compare Command: Supported 00:16:58.623 Write Uncorrectable Command: Not Supported 00:16:58.623 Dataset Management Command: Supported 00:16:58.623 Write Zeroes Command: Supported 00:16:58.623 Set Features Save Field: Not Supported 00:16:58.623 Reservations: Supported 00:16:58.623 Timestamp: Not Supported 00:16:58.623 Copy: Supported 00:16:58.623 Volatile Write Cache: Present 00:16:58.623 Atomic Write Unit (Normal): 1 00:16:58.623 Atomic Write Unit (PFail): 1 00:16:58.623 Atomic Compare & Write Unit: 1 00:16:58.623 Fused Compare & Write: Supported 00:16:58.623 Scatter-Gather List 00:16:58.623 SGL Command Set: Supported 00:16:58.623 SGL Keyed: Supported 00:16:58.623 SGL Bit Bucket Descriptor: Not Supported 00:16:58.623 SGL Metadata Pointer: Not Supported 00:16:58.623 Oversized SGL: Not Supported 00:16:58.623 SGL Metadata Address: Not Supported 00:16:58.623 SGL Offset: Supported 00:16:58.623 Transport SGL Data Block: Not Supported 00:16:58.623 Replay Protected Memory Block: Not Supported 00:16:58.623 00:16:58.623 Firmware Slot Information 00:16:58.623 ========================= 00:16:58.623 Active slot: 1 00:16:58.623 Slot 1 Firmware Revision: 24.05 00:16:58.623 00:16:58.623 00:16:58.623 Commands Supported and Effects 00:16:58.623 ============================== 00:16:58.623 Admin Commands 00:16:58.623 -------------- 00:16:58.623 Get Log Page (02h): Supported 00:16:58.623 Identify (06h): Supported 00:16:58.623 Abort (08h): Supported 00:16:58.623 Set Features (09h): Supported 00:16:58.623 Get Features (0Ah): Supported 00:16:58.623 Asynchronous Event Request (0Ch): Supported 00:16:58.623 Keep Alive (18h): Supported 00:16:58.623 I/O Commands 00:16:58.623 ------------ 00:16:58.623 Flush (00h): Supported LBA-Change 00:16:58.623 Write (01h): Supported LBA-Change 00:16:58.623 Read (02h): Supported 00:16:58.623 Compare (05h): Supported 00:16:58.623 Write Zeroes (08h): Supported LBA-Change 00:16:58.623 Dataset Management (09h): Supported LBA-Change 00:16:58.623 Copy (19h): Supported LBA-Change 00:16:58.623 Unknown (79h): Supported LBA-Change 00:16:58.623 Unknown (7Ah): Supported 00:16:58.623 00:16:58.623 Error Log 00:16:58.623 ========= 00:16:58.623 00:16:58.623 Arbitration 00:16:58.623 =========== 00:16:58.623 Arbitration Burst: 1 00:16:58.623 00:16:58.623 Power Management 00:16:58.623 ================ 00:16:58.623 Number of Power States: 1 00:16:58.623 Current Power State: Power State #0 00:16:58.623 Power State #0: 00:16:58.623 Max Power: 0.00 W 00:16:58.623 Non-Operational State: Operational 00:16:58.623 Entry Latency: Not Reported 00:16:58.623 Exit Latency: Not Reported 00:16:58.623 Relative Read Throughput: 0 00:16:58.623 Relative Read Latency: 0 00:16:58.623 Relative Write Throughput: 0 00:16:58.623 Relative Write Latency: 0 00:16:58.623 Idle Power: Not Reported 00:16:58.623 Active Power: Not Reported 00:16:58.623 Non-Operational 
Permissive Mode: Not Supported 00:16:58.623 00:16:58.623 Health Information 00:16:58.623 ================== 00:16:58.623 Critical Warnings: 00:16:58.623 Available Spare Space: OK 00:16:58.623 Temperature: OK 00:16:58.623 Device Reliability: OK 00:16:58.623 Read Only: No 00:16:58.623 Volatile Memory Backup: OK 00:16:58.623 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:58.623 Temperature Threshold: [2024-04-26 14:21:40.098206] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.623 [2024-04-26 14:21:40.098219] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a1fd70) 00:16:58.623 [2024-04-26 14:21:40.098232] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.623 [2024-04-26 14:21:40.098256] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a89860, cid 7, qid 0 00:16:58.623 [2024-04-26 14:21:40.098398] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.623 [2024-04-26 14:21:40.098412] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.623 [2024-04-26 14:21:40.098420] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.623 [2024-04-26 14:21:40.098428] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a89860) on tqpair=0x1a1fd70 00:16:58.623 [2024-04-26 14:21:40.098474] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:16:58.623 [2024-04-26 14:21:40.098498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.623 [2024-04-26 14:21:40.098512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.623 [2024-04-26 14:21:40.098524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.623 [2024-04-26 14:21:40.098535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.623 [2024-04-26 14:21:40.098551] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.623 [2024-04-26 14:21:40.098560] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.623 [2024-04-26 14:21:40.098568] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a1fd70) 00:16:58.623 [2024-04-26 14:21:40.098581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.623 [2024-04-26 14:21:40.098606] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a892e0, cid 3, qid 0 00:16:58.623 [2024-04-26 14:21:40.098743] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.623 [2024-04-26 14:21:40.098758] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.623 [2024-04-26 14:21:40.098766] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.623 [2024-04-26 14:21:40.098774] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a892e0) on tqpair=0x1a1fd70 00:16:58.623 [2024-04-26 14:21:40.098789] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.623 [2024-04-26 14:21:40.098798] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.623 [2024-04-26 14:21:40.098806] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a1fd70) 00:16:58.623 [2024-04-26 14:21:40.098818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.623 [2024-04-26 14:21:40.098850] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a892e0, cid 3, qid 0 00:16:58.623 [2024-04-26 14:21:40.099003] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.623 [2024-04-26 14:21:40.099019] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.623 [2024-04-26 14:21:40.099027] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.623 [2024-04-26 14:21:40.099035] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a892e0) on tqpair=0x1a1fd70 00:16:58.623 [2024-04-26 14:21:40.099047] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:16:58.623 [2024-04-26 14:21:40.099057] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:16:58.623 [2024-04-26 14:21:40.099075] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.623 [2024-04-26 14:21:40.099085] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.623 [2024-04-26 14:21:40.099093] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a1fd70) 00:16:58.623 [2024-04-26 14:21:40.099105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.623 [2024-04-26 14:21:40.099127] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a892e0, cid 3, qid 0 00:16:58.623 [2024-04-26 14:21:40.099259] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.623 [2024-04-26 14:21:40.099274] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.623 [2024-04-26 14:21:40.099282] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.623 [2024-04-26 14:21:40.099290] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a892e0) on tqpair=0x1a1fd70 00:16:58.623 [2024-04-26 14:21:40.099310] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.623 [2024-04-26 14:21:40.099320] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.623 [2024-04-26 14:21:40.099328] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a1fd70) 00:16:58.623 [2024-04-26 14:21:40.099340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.623 [2024-04-26 14:21:40.099362] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a892e0, cid 3, qid 0 00:16:58.623 [2024-04-26 14:21:40.099489] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.623 [2024-04-26 14:21:40.099505] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.623 [2024-04-26 14:21:40.099513] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.623 [2024-04-26 14:21:40.099521] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a892e0) on tqpair=0x1a1fd70 00:16:58.623 [2024-04-26 14:21:40.099540] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.623 [2024-04-26 14:21:40.099551] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.623 [2024-04-26 14:21:40.099559] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a1fd70) 00:16:58.623 [2024-04-26 14:21:40.099571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.623 [2024-04-26 14:21:40.099593] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a892e0, cid 3, qid 0 00:16:58.623 [2024-04-26 14:21:40.103645] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.623 [2024-04-26 14:21:40.103663] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.623 [2024-04-26 14:21:40.103671] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.623 [2024-04-26 14:21:40.103679] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a892e0) on tqpair=0x1a1fd70 00:16:58.623 [2024-04-26 14:21:40.103701] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:58.623 [2024-04-26 14:21:40.103711] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:58.623 [2024-04-26 14:21:40.103719] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a1fd70) 00:16:58.623 [2024-04-26 14:21:40.103736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:58.623 [2024-04-26 14:21:40.103760] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a892e0, cid 3, qid 0 00:16:58.623 [2024-04-26 14:21:40.103868] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:58.623 [2024-04-26 14:21:40.103881] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:58.623 [2024-04-26 14:21:40.103889] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:58.623 [2024-04-26 14:21:40.103897] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a892e0) on tqpair=0x1a1fd70 00:16:58.623 [2024-04-26 14:21:40.103913] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:16:58.623 0 Kelvin (-273 Celsius) 00:16:58.623 Available Spare: 0% 00:16:58.623 Available Spare Threshold: 0% 00:16:58.623 Life Percentage Used: 0% 00:16:58.623 Data Units Read: 0 00:16:58.623 Data Units Written: 0 00:16:58.623 Host Read Commands: 0 00:16:58.623 Host Write Commands: 0 00:16:58.624 Controller Busy Time: 0 minutes 00:16:58.624 Power Cycles: 0 00:16:58.624 Power On Hours: 0 hours 00:16:58.624 Unsafe Shutdowns: 0 00:16:58.624 Unrecoverable Media Errors: 0 00:16:58.624 Lifetime Error Log Entries: 0 00:16:58.624 Warning Temperature Time: 0 minutes 00:16:58.624 Critical Temperature Time: 0 minutes 00:16:58.624 00:16:58.624 Number of Queues 00:16:58.624 ================ 00:16:58.624 Number of I/O Submission Queues: 127 00:16:58.624 Number of I/O Completion Queues: 127 00:16:58.624 00:16:58.624 Active Namespaces 00:16:58.624 ================= 00:16:58.624 Namespace ID:1 00:16:58.624 Error Recovery Timeout: Unlimited 00:16:58.624 Command Set Identifier: NVM (00h) 00:16:58.624 Deallocate: Supported 00:16:58.624 Deallocated/Unwritten Error: Not Supported 00:16:58.624 Deallocated Read Value: Unknown 00:16:58.624 Deallocate in Write Zeroes: Not Supported 
00:16:58.624 Deallocated Guard Field: 0xFFFF 00:16:58.624 Flush: Supported 00:16:58.624 Reservation: Supported 00:16:58.624 Namespace Sharing Capabilities: Multiple Controllers 00:16:58.624 Size (in LBAs): 131072 (0GiB) 00:16:58.624 Capacity (in LBAs): 131072 (0GiB) 00:16:58.624 Utilization (in LBAs): 131072 (0GiB) 00:16:58.624 NGUID: ABCDEF0123456789ABCDEF0123456789 00:16:58.624 EUI64: ABCDEF0123456789 00:16:58.624 UUID: 3ebe265d-9cb5-40b7-af12-34e7341ef26e 00:16:58.624 Thin Provisioning: Not Supported 00:16:58.624 Per-NS Atomic Units: Yes 00:16:58.624 Atomic Boundary Size (Normal): 0 00:16:58.624 Atomic Boundary Size (PFail): 0 00:16:58.624 Atomic Boundary Offset: 0 00:16:58.624 Maximum Single Source Range Length: 65535 00:16:58.624 Maximum Copy Length: 65535 00:16:58.624 Maximum Source Range Count: 1 00:16:58.624 NGUID/EUI64 Never Reused: No 00:16:58.624 Namespace Write Protected: No 00:16:58.624 Number of LBA Formats: 1 00:16:58.624 Current LBA Format: LBA Format #00 00:16:58.624 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:58.624 00:16:58.624 14:21:40 -- host/identify.sh@51 -- # sync 00:16:58.624 14:21:40 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:58.624 14:21:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.624 14:21:40 -- common/autotest_common.sh@10 -- # set +x 00:16:58.624 14:21:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.624 14:21:40 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:16:58.624 14:21:40 -- host/identify.sh@56 -- # nvmftestfini 00:16:58.624 14:21:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:58.624 14:21:40 -- nvmf/common.sh@117 -- # sync 00:16:58.624 14:21:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:58.624 14:21:40 -- nvmf/common.sh@120 -- # set +e 00:16:58.624 14:21:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:58.624 14:21:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:58.624 rmmod nvme_tcp 00:16:58.624 rmmod nvme_fabrics 00:16:58.624 rmmod nvme_keyring 00:16:58.882 14:21:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:58.882 14:21:40 -- nvmf/common.sh@124 -- # set -e 00:16:58.882 14:21:40 -- nvmf/common.sh@125 -- # return 0 00:16:58.882 14:21:40 -- nvmf/common.sh@478 -- # '[' -n 3178448 ']' 00:16:58.882 14:21:40 -- nvmf/common.sh@479 -- # killprocess 3178448 00:16:58.882 14:21:40 -- common/autotest_common.sh@936 -- # '[' -z 3178448 ']' 00:16:58.882 14:21:40 -- common/autotest_common.sh@940 -- # kill -0 3178448 00:16:58.882 14:21:40 -- common/autotest_common.sh@941 -- # uname 00:16:58.882 14:21:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:58.882 14:21:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3178448 00:16:58.882 14:21:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:58.882 14:21:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:58.882 14:21:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3178448' 00:16:58.882 killing process with pid 3178448 00:16:58.882 14:21:40 -- common/autotest_common.sh@955 -- # kill 3178448 00:16:58.882 [2024-04-26 14:21:40.216628] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:16:58.882 14:21:40 -- common/autotest_common.sh@960 -- # wait 3178448 00:16:59.141 14:21:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:59.141 14:21:40 -- 
nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:59.141 14:21:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:59.141 14:21:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:59.141 14:21:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:59.141 14:21:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.141 14:21:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.141 14:21:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.046 14:21:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:01.046 00:17:01.046 real 0m5.024s 00:17:01.046 user 0m4.251s 00:17:01.046 sys 0m1.590s 00:17:01.046 14:21:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:01.046 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:17:01.046 ************************************ 00:17:01.046 END TEST nvmf_identify 00:17:01.047 ************************************ 00:17:01.047 14:21:42 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:01.047 14:21:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:01.047 14:21:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:01.047 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:17:01.304 ************************************ 00:17:01.305 START TEST nvmf_perf 00:17:01.305 ************************************ 00:17:01.305 14:21:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:01.305 * Looking for test storage... 00:17:01.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:01.305 14:21:42 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:01.305 14:21:42 -- nvmf/common.sh@7 -- # uname -s 00:17:01.305 14:21:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.305 14:21:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.305 14:21:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.305 14:21:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.305 14:21:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.305 14:21:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.305 14:21:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.305 14:21:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.305 14:21:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.305 14:21:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.305 14:21:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:01.305 14:21:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:17:01.305 14:21:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.305 14:21:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.305 14:21:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:01.305 14:21:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.305 14:21:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:01.305 14:21:42 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.305 14:21:42 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.305 14:21:42 -- 
scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.305 14:21:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.305 14:21:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.305 14:21:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.305 14:21:42 -- paths/export.sh@5 -- # export PATH 00:17:01.305 14:21:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.305 14:21:42 -- nvmf/common.sh@47 -- # : 0 00:17:01.305 14:21:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:01.305 14:21:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:01.305 14:21:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.305 14:21:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.305 14:21:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.305 14:21:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:01.305 14:21:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:01.305 14:21:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:01.305 14:21:42 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:01.305 14:21:42 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:01.305 14:21:42 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:01.305 14:21:42 -- host/perf.sh@17 -- # nvmftestinit 00:17:01.305 14:21:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:01.305 14:21:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.305 14:21:42 -- nvmf/common.sh@437 -- # 
prepare_net_devs 00:17:01.305 14:21:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:01.305 14:21:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:01.305 14:21:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.305 14:21:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.305 14:21:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.305 14:21:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:01.305 14:21:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:01.305 14:21:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:01.305 14:21:42 -- common/autotest_common.sh@10 -- # set +x 00:17:03.206 14:21:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:03.206 14:21:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:03.206 14:21:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:03.206 14:21:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:03.206 14:21:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:03.206 14:21:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:03.206 14:21:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:03.206 14:21:44 -- nvmf/common.sh@295 -- # net_devs=() 00:17:03.206 14:21:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:03.206 14:21:44 -- nvmf/common.sh@296 -- # e810=() 00:17:03.206 14:21:44 -- nvmf/common.sh@296 -- # local -ga e810 00:17:03.206 14:21:44 -- nvmf/common.sh@297 -- # x722=() 00:17:03.206 14:21:44 -- nvmf/common.sh@297 -- # local -ga x722 00:17:03.206 14:21:44 -- nvmf/common.sh@298 -- # mlx=() 00:17:03.206 14:21:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:03.206 14:21:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:03.206 14:21:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:03.206 14:21:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:03.206 14:21:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:03.206 14:21:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:03.206 14:21:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:03.206 14:21:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:03.206 14:21:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:03.206 14:21:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:03.206 14:21:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:03.206 14:21:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:03.206 14:21:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:03.206 14:21:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:03.206 14:21:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:03.206 14:21:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:03.206 14:21:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:03.206 14:21:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:03.206 14:21:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:03.206 14:21:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:17:03.206 Found 0000:08:00.0 (0x8086 - 0x159b) 00:17:03.206 14:21:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:03.206 14:21:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:03.206 14:21:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.206 14:21:44 -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.206 14:21:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:03.206 14:21:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:03.206 14:21:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:17:03.206 Found 0000:08:00.1 (0x8086 - 0x159b) 00:17:03.206 14:21:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:03.206 14:21:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:03.206 14:21:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.206 14:21:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.206 14:21:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:03.206 14:21:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:03.206 14:21:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:03.206 14:21:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:03.206 14:21:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:03.207 14:21:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.207 14:21:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:03.207 14:21:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.207 14:21:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:17:03.207 Found net devices under 0000:08:00.0: cvl_0_0 00:17:03.207 14:21:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.207 14:21:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:03.207 14:21:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.207 14:21:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:03.207 14:21:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.207 14:21:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:17:03.207 Found net devices under 0000:08:00.1: cvl_0_1 00:17:03.207 14:21:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.207 14:21:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:03.207 14:21:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:03.207 14:21:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:03.207 14:21:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:03.207 14:21:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:03.207 14:21:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:03.207 14:21:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:03.207 14:21:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:03.207 14:21:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:03.207 14:21:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:03.207 14:21:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:03.207 14:21:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:03.207 14:21:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:03.207 14:21:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:03.207 14:21:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:03.207 14:21:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:03.207 14:21:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:03.207 14:21:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:03.207 14:21:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:03.207 14:21:44 -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:03.207 14:21:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:03.207 14:21:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:03.207 14:21:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:03.207 14:21:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:03.207 14:21:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:03.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:17:03.207 00:17:03.207 --- 10.0.0.2 ping statistics --- 00:17:03.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.207 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:17:03.207 14:21:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:03.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:03.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:17:03.207 00:17:03.207 --- 10.0.0.1 ping statistics --- 00:17:03.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.207 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:17:03.207 14:21:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.207 14:21:44 -- nvmf/common.sh@411 -- # return 0 00:17:03.207 14:21:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:03.207 14:21:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.207 14:21:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:03.207 14:21:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:03.207 14:21:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.207 14:21:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:03.207 14:21:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:03.207 14:21:44 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:03.207 14:21:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:03.207 14:21:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:03.207 14:21:44 -- common/autotest_common.sh@10 -- # set +x 00:17:03.207 14:21:44 -- nvmf/common.sh@470 -- # nvmfpid=3180063 00:17:03.207 14:21:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:03.207 14:21:44 -- nvmf/common.sh@471 -- # waitforlisten 3180063 00:17:03.207 14:21:44 -- common/autotest_common.sh@817 -- # '[' -z 3180063 ']' 00:17:03.207 14:21:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.207 14:21:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:03.207 14:21:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.207 14:21:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:03.207 14:21:44 -- common/autotest_common.sh@10 -- # set +x 00:17:03.207 [2024-04-26 14:21:44.491264] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
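Annotation: nvmf_tgt is started inside the cvl_0_0_ns_spdk namespace set up above, so the target binds 10.0.0.2 on one E810 port while the initiator stays in the root namespace on 10.0.0.1. A rough stand-in for what common.sh and waitforlisten just did, with repository paths shortened; the readiness probe here is an assumption (waitforlisten actually polls the /var/tmp/spdk.sock RPC socket):

  # start the target in the namespace, then wait until its RPC server answers
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done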
00:17:03.207 [2024-04-26 14:21:44.491366] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.207 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.207 [2024-04-26 14:21:44.557702] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:03.207 [2024-04-26 14:21:44.676128] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.207 [2024-04-26 14:21:44.676189] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.207 [2024-04-26 14:21:44.676204] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.207 [2024-04-26 14:21:44.676217] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.207 [2024-04-26 14:21:44.676229] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:03.207 [2024-04-26 14:21:44.676309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.207 [2024-04-26 14:21:44.676346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.207 [2024-04-26 14:21:44.676396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:03.207 [2024-04-26 14:21:44.676399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.465 14:21:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:03.465 14:21:44 -- common/autotest_common.sh@850 -- # return 0 00:17:03.465 14:21:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:03.465 14:21:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:03.465 14:21:44 -- common/autotest_common.sh@10 -- # set +x 00:17:03.465 14:21:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.465 14:21:44 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:17:03.465 14:21:44 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:17:06.744 14:21:47 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:17:06.744 14:21:47 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:06.744 14:21:48 -- host/perf.sh@30 -- # local_nvme_trid=0000:84:00.0 00:17:06.744 14:21:48 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:07.002 14:21:48 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:07.002 14:21:48 -- host/perf.sh@33 -- # '[' -n 0000:84:00.0 ']' 00:17:07.002 14:21:48 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:07.002 14:21:48 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:07.002 14:21:48 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:07.260 [2024-04-26 14:21:48.823863] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.518 14:21:48 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:07.776 14:21:49 -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:07.776 14:21:49 -- host/perf.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:08.034 14:21:49 -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:08.034 14:21:49 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:08.034 14:21:49 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:08.292 [2024-04-26 14:21:49.814251] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:08.292 14:21:49 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:08.550 14:21:50 -- host/perf.sh@52 -- # '[' -n 0000:84:00.0 ']' 00:17:08.550 14:21:50 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0' 00:17:08.550 14:21:50 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:08.550 14:21:50 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0' 00:17:09.921 Initializing NVMe Controllers 00:17:09.921 Attached to NVMe Controller at 0000:84:00.0 [8086:0a54] 00:17:09.921 Associating PCIE (0000:84:00.0) NSID 1 with lcore 0 00:17:09.921 Initialization complete. Launching workers. 00:17:09.922 ======================================================== 00:17:09.922 Latency(us) 00:17:09.922 Device Information : IOPS MiB/s Average min max 00:17:09.922 PCIE (0000:84:00.0) NSID 1 from core 0: 66561.88 260.01 479.86 55.91 4388.24 00:17:09.922 ======================================================== 00:17:09.922 Total : 66561.88 260.01 479.86 55.91 4388.24 00:17:09.922 00:17:09.922 14:21:51 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:09.922 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.336 Initializing NVMe Controllers 00:17:11.336 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:11.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:11.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:11.337 Initialization complete. Launching workers. 
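Annotation: the target bring-up traced above reduces to a short RPC sequence: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1, attach two namespaces (the 64 MiB / 512 B Malloc0 bdev and Nvme0n1 from the local 0000:84:00.0 drive that gen_nvme.sh registered), then listen on 10.0.0.2:4420 for both the subsystem and discovery. Condensed, with repository paths shortened:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py bdev_malloc_create 64 512        # returns "Malloc0"
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420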
00:17:11.337 ======================================================== 00:17:11.337 Latency(us) 00:17:11.337 Device Information : IOPS MiB/s Average min max 00:17:11.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.00 0.31 12956.42 179.16 44796.94 00:17:11.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 49.00 0.19 20988.10 7928.20 47912.57 00:17:11.337 ======================================================== 00:17:11.337 Total : 128.00 0.50 16031.05 179.16 47912.57 00:17:11.337 00:17:11.337 14:21:52 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:11.337 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.711 Initializing NVMe Controllers 00:17:12.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:12.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:12.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:12.711 Initialization complete. Launching workers. 00:17:12.711 ======================================================== 00:17:12.711 Latency(us) 00:17:12.711 Device Information : IOPS MiB/s Average min max 00:17:12.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7671.13 29.97 4174.22 773.49 9579.32 00:17:12.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3743.74 14.62 8627.69 5186.54 17154.50 00:17:12.711 ======================================================== 00:17:12.711 Total : 11414.87 44.59 5634.83 773.49 17154.50 00:17:12.711 00:17:12.711 14:21:53 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:17:12.711 14:21:53 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:17:12.711 14:21:53 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:12.711 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.240 Initializing NVMe Controllers 00:17:15.240 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:15.240 Controller IO queue size 128, less than required. 00:17:15.240 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:15.240 Controller IO queue size 128, less than required. 00:17:15.241 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:15.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:15.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:15.241 Initialization complete. Launching workers. 
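Annotation: the MiB/s column in these tables is just IOPS times I/O size; for the -q 32 / -o 4096 run above, NSID 1's 7671.13 IOPS at 4096 B per I/O reproduces the reported 29.97 MiB/s:

  # IOPS x I/O size, converted to MiB/s
  echo "scale=2; 7671.13 * 4096 / 1048576" | bc   # -> 29.96, i.e. 29.97 after rounding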
00:17:15.241 ======================================================== 00:17:15.241 Latency(us) 00:17:15.241 Device Information : IOPS MiB/s Average min max 00:17:15.241 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1502.07 375.52 86988.77 49897.81 152816.80 00:17:15.241 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 561.59 140.40 231707.13 86409.05 302753.99 00:17:15.241 ======================================================== 00:17:15.241 Total : 2063.66 515.92 126371.49 49897.81 302753.99 00:17:15.241 00:17:15.241 14:21:56 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:17:15.241 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.241 No valid NVMe controllers or AIO or URING devices found 00:17:15.241 Initializing NVMe Controllers 00:17:15.241 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:15.241 Controller IO queue size 128, less than required. 00:17:15.241 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:15.241 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:15.241 Controller IO queue size 128, less than required. 00:17:15.241 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:15.241 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:17:15.241 WARNING: Some requested NVMe devices were skipped 00:17:15.241 14:21:56 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:17:15.241 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.828 Initializing NVMe Controllers 00:17:17.828 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:17.828 Controller IO queue size 128, less than required. 00:17:17.828 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:17.828 Controller IO queue size 128, less than required. 00:17:17.828 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:17.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:17.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:17.828 Initialization complete. Launching workers. 
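Annotation: the "not a multiple of nsid ... sector size" warnings in the -o 36964 run are a plain alignment check: 36964 is not divisible by the 512 B LBA size, so perf drops both namespaces from the test and, with nothing left to attach, also emits the "No valid NVMe controllers" line that the log buffering printed slightly out of order. Quick check:

  echo $((36964 % 512))   # prints 100, so 36964 B I/Os cannot align to 512 B LBAs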
00:17:17.828
00:17:17.828 ====================
00:17:17.828 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:17:17.828 TCP transport:
00:17:17.828 polls: 7336
00:17:17.828 idle_polls: 4346
00:17:17.828 sock_completions: 2990
00:17:17.828 nvme_completions: 5497
00:17:17.828 submitted_requests: 8264
00:17:17.828 queued_requests: 1
00:17:17.828
00:17:17.828 ====================
00:17:17.828 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:17:17.828 TCP transport:
00:17:17.828 polls: 10115
00:17:17.828 idle_polls: 6885
00:17:17.828 sock_completions: 3230
00:17:17.828 nvme_completions: 5651
00:17:17.828 submitted_requests: 8518
00:17:17.828 queued_requests: 1
00:17:17.828 ========================================================
00:17:17.828 Latency(us)
00:17:17.828 Device Information : IOPS MiB/s Average min max
00:17:17.828 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1373.14 343.28 95043.99 65849.11 137380.94
00:17:17.828 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1411.62 352.90 91948.68 42456.28 128956.08
00:17:17.828 ========================================================
00:17:17.828 Total : 2784.76 696.19 93474.95 42456.28 137380.94
00:17:17.828
00:17:17.828 14:21:59 -- host/perf.sh@66 -- # sync
00:17:17.828 14:21:59 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:17.828 14:21:59 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:17:17.828 14:21:59 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:17:17.828 14:21:59 -- host/perf.sh@114 -- # nvmftestfini
00:17:17.828 14:21:59 -- nvmf/common.sh@477 -- # nvmfcleanup
00:17:17.828 14:21:59 -- nvmf/common.sh@117 -- # sync
00:17:17.828 14:21:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:17.828 14:21:59 -- nvmf/common.sh@120 -- # set +e
00:17:17.828 14:21:59 -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:17.828 14:21:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:17:17.828 rmmod nvme_tcp
00:17:17.828 rmmod nvme_fabrics
00:17:17.828 rmmod nvme_keyring
00:17:18.086 14:21:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:17:18.086 14:21:59 -- nvmf/common.sh@124 -- # set -e
00:17:18.086 14:21:59 -- nvmf/common.sh@125 -- # return 0
00:17:18.086 14:21:59 -- nvmf/common.sh@478 -- # '[' -n 3180063 ']'
00:17:18.086 14:21:59 -- nvmf/common.sh@479 -- # killprocess 3180063
00:17:18.086 14:21:59 -- common/autotest_common.sh@936 -- # '[' -z 3180063 ']'
00:17:18.086 14:21:59 -- common/autotest_common.sh@940 -- # kill -0 3180063
00:17:18.086 14:21:59 -- common/autotest_common.sh@941 -- # uname
00:17:18.086 14:21:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:18.086 14:21:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3180063
00:17:18.086 14:21:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:17:18.086 14:21:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:17:18.086 14:21:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3180063'
00:17:18.086 killing process with pid 3180063
00:17:18.086 14:21:59 -- common/autotest_common.sh@955 -- # kill 3180063
00:17:18.086 14:21:59 -- common/autotest_common.sh@960 -- # wait 3180063
00:17:19.987 14:22:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:17:19.987 14:22:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:17:19.987 14:22:01 -- nvmf/common.sh@485 -- #
nvmf_tcp_fini 00:17:19.987 14:22:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:19.987 14:22:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:19.987 14:22:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.987 14:22:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:19.987 14:22:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.906 14:22:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:21.906 00:17:21.906 real 0m20.467s 00:17:21.906 user 1m4.311s 00:17:21.906 sys 0m4.732s 00:17:21.906 14:22:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:21.906 14:22:03 -- common/autotest_common.sh@10 -- # set +x 00:17:21.906 ************************************ 00:17:21.906 END TEST nvmf_perf 00:17:21.906 ************************************ 00:17:21.906 14:22:03 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:21.906 14:22:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:21.906 14:22:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:21.906 14:22:03 -- common/autotest_common.sh@10 -- # set +x 00:17:21.906 ************************************ 00:17:21.906 START TEST nvmf_fio_host 00:17:21.906 ************************************ 00:17:21.906 14:22:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:21.906 * Looking for test storage... 00:17:21.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:21.906 14:22:03 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:21.906 14:22:03 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.906 14:22:03 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.906 14:22:03 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.906 14:22:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.906 14:22:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.906 14:22:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.906 14:22:03 -- paths/export.sh@5 -- # export PATH 00:17:21.906 14:22:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.907 14:22:03 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:21.907 14:22:03 -- nvmf/common.sh@7 -- # uname -s 00:17:21.907 14:22:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.907 14:22:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.907 14:22:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.907 14:22:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.907 14:22:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.907 14:22:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.907 14:22:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.907 14:22:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.907 14:22:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.907 14:22:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.907 14:22:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:21.907 14:22:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:17:21.907 14:22:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.907 14:22:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:21.907 14:22:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:21.907 14:22:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:21.907 14:22:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:21.907 14:22:03 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.907 14:22:03 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.907 14:22:03 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.907 14:22:03 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.907 14:22:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.907 14:22:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.907 14:22:03 -- paths/export.sh@5 -- # export PATH 00:17:21.907 14:22:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.907 14:22:03 -- nvmf/common.sh@47 -- # : 0 00:17:21.907 14:22:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:21.907 14:22:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:21.907 14:22:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:21.907 14:22:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.907 14:22:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.907 14:22:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:21.907 14:22:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:21.907 14:22:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:21.907 14:22:03 -- host/fio.sh@12 -- # nvmftestinit 00:17:21.907 14:22:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:21.907 14:22:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:21.907 14:22:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:21.907 14:22:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:21.907 14:22:03 -- 
nvmf/common.sh@401 -- # remove_spdk_ns 00:17:21.907 14:22:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.907 14:22:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:21.907 14:22:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.907 14:22:03 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:21.907 14:22:03 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:21.907 14:22:03 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:21.907 14:22:03 -- common/autotest_common.sh@10 -- # set +x 00:17:23.807 14:22:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:23.807 14:22:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:23.807 14:22:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:23.807 14:22:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:23.807 14:22:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:23.807 14:22:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:23.807 14:22:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:23.807 14:22:04 -- nvmf/common.sh@295 -- # net_devs=() 00:17:23.807 14:22:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:23.807 14:22:04 -- nvmf/common.sh@296 -- # e810=() 00:17:23.807 14:22:04 -- nvmf/common.sh@296 -- # local -ga e810 00:17:23.807 14:22:04 -- nvmf/common.sh@297 -- # x722=() 00:17:23.807 14:22:04 -- nvmf/common.sh@297 -- # local -ga x722 00:17:23.807 14:22:04 -- nvmf/common.sh@298 -- # mlx=() 00:17:23.807 14:22:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:23.807 14:22:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:23.807 14:22:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:23.807 14:22:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:23.807 14:22:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:23.807 14:22:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:23.807 14:22:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:23.807 14:22:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:23.807 14:22:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:23.807 14:22:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:23.807 14:22:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:23.807 14:22:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:23.807 14:22:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:23.807 14:22:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:23.807 14:22:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:23.807 14:22:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:23.807 14:22:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:23.807 14:22:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:23.807 14:22:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:23.807 14:22:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:17:23.807 Found 0000:08:00.0 (0x8086 - 0x159b) 00:17:23.807 14:22:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:23.807 14:22:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:23.807 14:22:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:23.807 14:22:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:23.807 14:22:04 -- nvmf/common.sh@352 -- # [[ tcp == 
rdma ]] 00:17:23.807 14:22:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:23.807 14:22:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:17:23.807 Found 0000:08:00.1 (0x8086 - 0x159b) 00:17:23.807 14:22:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:23.807 14:22:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:23.807 14:22:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:23.808 14:22:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:23.808 14:22:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:23.808 14:22:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:23.808 14:22:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:23.808 14:22:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:23.808 14:22:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:23.808 14:22:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:23.808 14:22:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:23.808 14:22:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:23.808 14:22:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:17:23.808 Found net devices under 0000:08:00.0: cvl_0_0 00:17:23.808 14:22:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:23.808 14:22:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:23.808 14:22:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:23.808 14:22:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:23.808 14:22:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:23.808 14:22:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:17:23.808 Found net devices under 0000:08:00.1: cvl_0_1 00:17:23.808 14:22:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:23.808 14:22:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:23.808 14:22:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:23.808 14:22:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:23.808 14:22:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:23.808 14:22:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:23.808 14:22:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:23.808 14:22:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:23.808 14:22:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:23.808 14:22:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:23.808 14:22:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:23.808 14:22:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:23.808 14:22:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:23.808 14:22:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:23.808 14:22:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:23.808 14:22:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:23.808 14:22:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:23.808 14:22:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:23.808 14:22:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:23.808 14:22:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:23.808 14:22:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:23.808 14:22:05 -- nvmf/common.sh@258 -- # ip link set 
cvl_0_1 up
00:17:23.808 14:22:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:23.808 14:22:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:23.808 14:22:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:23.808 14:22:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:17:23.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:23.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms
00:17:23.808
00:17:23.808 --- 10.0.0.2 ping statistics ---
00:17:23.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:23.808 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms
00:17:23.808 14:22:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:23.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:23.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms
00:17:23.808
00:17:23.808 --- 10.0.0.1 ping statistics ---
00:17:23.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:23.808 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms
00:17:23.808 14:22:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:23.808 14:22:05 -- nvmf/common.sh@411 -- # return 0
00:17:23.808 14:22:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:17:23.808 14:22:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:23.808 14:22:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:17:23.808 14:22:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:17:23.808 14:22:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:23.808 14:22:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:17:23.808 14:22:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:17:23.808 14:22:05 -- host/fio.sh@14 -- # [[ y != y ]]
00:17:23.808 14:22:05 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt
00:17:23.808 14:22:05 -- common/autotest_common.sh@710 -- # xtrace_disable
00:17:23.808 14:22:05 -- common/autotest_common.sh@10 -- # set +x
00:17:23.808 14:22:05 -- host/fio.sh@22 -- # nvmfpid=3183122
00:17:23.808 14:22:05 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:17:23.808 14:22:05 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:17:23.808 14:22:05 -- host/fio.sh@26 -- # waitforlisten 3183122
00:17:23.808 14:22:05 -- common/autotest_common.sh@817 -- # '[' -z 3183122 ']'
00:17:23.808 14:22:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:23.808 14:22:05 -- common/autotest_common.sh@822 -- # local max_retries=100
00:17:23.808 14:22:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:23.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:23.808 14:22:05 -- common/autotest_common.sh@826 -- # xtrace_disable
00:17:23.808 14:22:05 -- common/autotest_common.sh@10 -- # set +x
00:17:23.808 [2024-04-26 14:22:05.141809] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
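
The nvmf_tcp_init trace above is how these tests fake a two-host fabric on a single machine: one port of the E810 pair (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and one ping in each direction proves the path before the target comes up. Condensed from the commands traced above (same interface names and addresses):

  # single-host NVMe-oF test rig via network namespaces
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns
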
00:17:23.808 [2024-04-26 14:22:05.141911] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:23.808 EAL: No free 2048 kB hugepages reported on node 1
00:17:23.808 [2024-04-26 14:22:05.211364] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:17:23.808 [2024-04-26 14:22:05.329440] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:23.808 [2024-04-26 14:22:05.329503] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:23.808 [2024-04-26 14:22:05.329519] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:23.808 [2024-04-26 14:22:05.329540] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:23.808 [2024-04-26 14:22:05.329552] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:23.808 [2024-04-26 14:22:05.329610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:23.808 [2024-04-26 14:22:05.329668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:17:23.808 [2024-04-26 14:22:05.329757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:17:23.808 [2024-04-26 14:22:05.329789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:24.066 14:22:05 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:17:24.066 14:22:05 -- common/autotest_common.sh@850 -- # return 0
00:17:24.066 14:22:05 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:17:24.066 14:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:24.066 14:22:05 -- common/autotest_common.sh@10 -- # set +x
00:17:24.066 [2024-04-26 14:22:05.446218] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:24.066 14:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:24.066 14:22:05 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt
00:17:24.066 14:22:05 -- common/autotest_common.sh@716 -- # xtrace_disable
00:17:24.066 14:22:05 -- common/autotest_common.sh@10 -- # set +x
00:17:24.066 14:22:05 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:17:24.066 14:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:24.066 14:22:05 -- common/autotest_common.sh@10 -- # set +x
00:17:24.066 Malloc1
00:17:24.066 14:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:24.066 14:22:05 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:17:24.066 14:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:24.066 14:22:05 -- common/autotest_common.sh@10 -- # set +x
00:17:24.066 14:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:24.066 14:22:05 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:17:24.066 14:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:24.066 14:22:05 -- common/autotest_common.sh@10 -- # set +x
00:17:24.067 14:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:24.067 14:22:05 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:24.067 14:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:24.067 14:22:05 -- common/autotest_common.sh@10 -- # set +x
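
The rpc_cmd calls above are the standard bring-up once nvmf_tgt is listening on /var/tmp/spdk.sock: create the TCP transport, back a namespace with a malloc bdev, create the subsystem, attach the namespace, then open a listener; the NVMe/TCP "Listening" notice just below confirms that last step. The same sequence spelled out with scripts/rpc.py, which rpc_cmd drives in these test scripts (the '-o -u 8192' transport options are copied verbatim from the trace; the comments are glosses, not log output):

  # subsystem bring-up mirroring the rpc_cmd sequence above
  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc1      # 64 MiB bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
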
00:17:24.067 [2024-04-26 14:22:05.524692] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.067 14:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:24.067 14:22:05 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:24.067 14:22:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:24.067 14:22:05 -- common/autotest_common.sh@10 -- # set +x 00:17:24.067 14:22:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:24.067 14:22:05 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:17:24.067 14:22:05 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:24.067 14:22:05 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:24.067 14:22:05 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:17:24.067 14:22:05 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:24.067 14:22:05 -- common/autotest_common.sh@1325 -- # local sanitizers 00:17:24.067 14:22:05 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:24.067 14:22:05 -- common/autotest_common.sh@1327 -- # shift 00:17:24.067 14:22:05 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:17:24.067 14:22:05 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:24.067 14:22:05 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:24.067 14:22:05 -- common/autotest_common.sh@1331 -- # grep libasan 00:17:24.067 14:22:05 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:24.067 14:22:05 -- common/autotest_common.sh@1331 -- # asan_lib= 00:17:24.067 14:22:05 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:24.067 14:22:05 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:24.067 14:22:05 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:17:24.067 14:22:05 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:17:24.067 14:22:05 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:24.067 14:22:05 -- common/autotest_common.sh@1331 -- # asan_lib= 00:17:24.067 14:22:05 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:17:24.067 14:22:05 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:17:24.067 14:22:05 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:24.324 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:24.324 fio-3.35 00:17:24.324 Starting 1 thread 00:17:24.324 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.848 00:17:26.848 test: (groupid=0, jobs=1): err= 0: pid=3183289: Fri Apr 26 14:22:08 2024 00:17:26.848 read: IOPS=7702, 
BW=30.1MiB/s (31.6MB/s)(60.4MiB/2008msec)
00:17:26.848 slat (usec): min=2, max=151, avg= 2.65, stdev= 1.75
00:17:26.848 clat (usec): min=2379, max=14892, avg=9101.40, stdev=765.88
00:17:26.848 lat (usec): min=2401, max=14895, avg=9104.05, stdev=765.74
00:17:26.848 clat percentiles (usec):
00:17:26.848 | 1.00th=[ 7373], 5.00th=[ 7898], 10.00th=[ 8160], 20.00th=[ 8455],
00:17:26.848 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9241],
00:17:26.848 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10028], 95.00th=[10290],
00:17:26.848 | 99.00th=[10814], 99.50th=[10945], 99.90th=[12911], 99.95th=[14091],
00:17:26.848 | 99.99th=[14877]
00:17:26.848 bw ( KiB/s): min=29840, max=31408, per=100.00%, avg=30820.00, stdev=727.50, samples=4
00:17:26.848 iops : min= 7460, max= 7852, avg=7705.00, stdev=181.88, samples=4
00:17:26.848 write: IOPS=7696, BW=30.1MiB/s (31.5MB/s)(60.4MiB/2008msec); 0 zone resets
00:17:26.848 slat (usec): min=2, max=134, avg= 2.77, stdev= 1.30
00:17:26.848 clat (usec): min=1840, max=14991, avg=7461.37, stdev=659.90
00:17:26.848 lat (usec): min=1849, max=14994, avg=7464.14, stdev=659.82
00:17:26.848 clat percentiles (usec):
00:17:26.848 | 1.00th=[ 6063], 5.00th=[ 6521], 10.00th=[ 6718], 20.00th=[ 6980],
00:17:26.848 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7635],
00:17:26.848 | 70.00th=[ 7767], 80.00th=[ 7963], 90.00th=[ 8160], 95.00th=[ 8356],
00:17:26.848 | 99.00th=[ 8848], 99.50th=[ 9110], 99.90th=[13698], 99.95th=[14091],
00:17:26.848 | 99.99th=[15008]
00:17:26.848 bw ( KiB/s): min=30640, max=30976, per=99.95%, avg=30770.00, stdev=156.82, samples=4
00:17:26.848 iops : min= 7660, max= 7744, avg=7692.50, stdev=39.20, samples=4
00:17:26.848 lat (msec) : 2=0.02%, 4=0.10%, 10=94.33%, 20=5.56%
00:17:26.848 cpu : usr=65.87%, sys=31.69%, ctx=73, majf=0, minf=39
00:17:26.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:17:26.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:26.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:26.848 issued rwts: total=15467,15455,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:26.848 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:26.848
00:17:26.848 Run status group 0 (all jobs):
00:17:26.848 READ: bw=30.1MiB/s (31.6MB/s), 30.1MiB/s-30.1MiB/s (31.6MB/s-31.6MB/s), io=60.4MiB (63.4MB), run=2008-2008msec
00:17:26.848 WRITE: bw=30.1MiB/s (31.5MB/s), 30.1MiB/s-30.1MiB/s (31.5MB/s-31.5MB/s), io=60.4MiB (63.3MB), run=2008-2008msec
00:17:26.848 14:22:08 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:17:26.848 14:22:08 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:17:26.848 14:22:08 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio
00:17:26.848 14:22:08 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:17:26.848 14:22:08 -- common/autotest_common.sh@1325 -- # local sanitizers
00:17:26.848 14:22:08 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:17:26.848 14:22:08 -- common/autotest_common.sh@1327 -- # shift
00:17:26.848 14:22:08 --
common/autotest_common.sh@1329 -- # local asan_lib=
00:17:26.848 14:22:08 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}"
00:17:26.848 14:22:08 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:17:26.848 14:22:08 -- common/autotest_common.sh@1331 -- # grep libasan
00:17:26.848 14:22:08 -- common/autotest_common.sh@1331 -- # awk '{print $3}'
00:17:26.848 14:22:08 -- common/autotest_common.sh@1331 -- # asan_lib=
00:17:26.848 14:22:08 -- common/autotest_common.sh@1332 -- # [[ -n '' ]]
00:17:26.848 14:22:08 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}"
00:17:26.848 14:22:08 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:17:26.848 14:22:08 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan
00:17:26.848 14:22:08 -- common/autotest_common.sh@1331 -- # awk '{print $3}'
00:17:26.848 14:22:08 -- common/autotest_common.sh@1331 -- # asan_lib=
00:17:26.848 14:22:08 -- common/autotest_common.sh@1332 -- # [[ -n '' ]]
00:17:26.848 14:22:08 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:17:26.848 14:22:08 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:17:26.848 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:17:26.848 fio-3.35
00:17:26.848 Starting 1 thread
00:17:26.848 EAL: No free 2048 kB hugepages reported on node 1
00:17:29.374
00:17:29.374 test: (groupid=0, jobs=1): err= 0: pid=3183553: Fri Apr 26 14:22:10 2024
00:17:29.374 read: IOPS=7529, BW=118MiB/s (123MB/s)(236MiB/2008msec)
00:17:29.374 slat (usec): min=3, max=120, avg= 4.05, stdev= 1.52
00:17:29.374 clat (usec): min=2479, max=17911, avg=9788.05, stdev=2235.93
00:17:29.374 lat (usec): min=2482, max=17914, avg=9792.10, stdev=2235.99
00:17:29.374 clat percentiles (usec):
00:17:29.374 | 1.00th=[ 5211], 5.00th=[ 6390], 10.00th=[ 7046], 20.00th=[ 7832],
00:17:29.374 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10159],
00:17:29.374 | 70.00th=[10945], 80.00th=[11731], 90.00th=[12649], 95.00th=[13566],
00:17:29.374 | 99.00th=[15795], 99.50th=[16581], 99.90th=[17433], 99.95th=[17695],
00:17:29.374 | 99.99th=[17957]
00:17:29.374 bw ( KiB/s): min=55008, max=68512, per=51.44%, avg=61968.00, stdev=7233.53, samples=4
00:17:29.374 iops : min= 3438, max= 4282, avg=3873.00, stdev=452.10, samples=4
00:17:29.374 write: IOPS=4475, BW=69.9MiB/s (73.3MB/s)(126MiB/1808msec); 0 zone resets
00:17:29.374 slat (usec): min=32, max=175, avg=37.32, stdev= 5.81
00:17:29.374 clat (usec): min=4307, max=22871, avg=12862.53, stdev=2116.29
00:17:29.374 lat (usec): min=4349, max=22913, avg=12899.85, stdev=2115.82
00:17:29.374 clat percentiles (usec):
00:17:29.374 | 1.00th=[ 8586], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[10945],
00:17:29.374 | 30.00th=[11600], 40.00th=[12256], 50.00th=[12780], 60.00th=[13304],
00:17:29.374 | 70.00th=[13829], 80.00th=[14615], 90.00th=[15533], 95.00th=[16450],
00:17:29.374 | 99.00th=[18744], 99.50th=[20055], 99.90th=[21365], 99.95th=[21627],
00:17:29.374 | 99.99th=[22938]
00:17:29.374 bw ( KiB/s): min=56704, max=71680, per=89.71%, avg=64232.00, stdev=8034.46, samples=4
00:17:29.374 iops : min= 3544, max= 4480, avg=4014.50, stdev=502.15, samples=4
00:17:29.374 lat (msec) : 4=0.14%, 10=39.22%, 20=60.44%, 50=0.20%
00:17:29.374 cpu : usr=78.67%, sys=19.38%, ctx=49, majf=0, minf=63
00:17:29.374 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6%
00:17:29.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:29.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:29.374 issued rwts: total=15119,8091,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:29.374 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:29.374
00:17:29.374 Run status group 0 (all jobs):
00:17:29.374 READ: bw=118MiB/s (123MB/s), 118MiB/s-118MiB/s (123MB/s-123MB/s), io=236MiB (248MB), run=2008-2008msec
00:17:29.374 WRITE: bw=69.9MiB/s (73.3MB/s), 69.9MiB/s-69.9MiB/s (73.3MB/s-73.3MB/s), io=126MiB (133MB), run=1808-1808msec
00:17:29.374 14:22:10 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:29.374 14:22:10 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:29.374 14:22:10 -- common/autotest_common.sh@10 -- # set +x
00:17:29.374 14:22:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:29.374 14:22:10 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']'
00:17:29.374 14:22:10 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT
00:17:29.374 14:22:10 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state
00:17:29.374 14:22:10 -- host/fio.sh@84 -- # nvmftestfini
00:17:29.374 14:22:10 -- nvmf/common.sh@477 -- # nvmfcleanup
00:17:29.374 14:22:10 -- nvmf/common.sh@117 -- # sync
00:17:29.374 14:22:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:17:29.374 14:22:10 -- nvmf/common.sh@120 -- # set +e
00:17:29.374 14:22:10 -- nvmf/common.sh@121 -- # for i in {1..20}
00:17:29.374 14:22:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:17:29.374 rmmod nvme_tcp
00:17:29.374 rmmod nvme_fabrics
00:17:29.374 rmmod nvme_keyring
00:17:29.374 14:22:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:17:29.374 14:22:10 -- nvmf/common.sh@124 -- # set -e
00:17:29.374 14:22:10 -- nvmf/common.sh@125 -- # return 0
00:17:29.374 14:22:10 -- nvmf/common.sh@478 -- # '[' -n 3183122 ']'
00:17:29.374 14:22:10 -- nvmf/common.sh@479 -- # killprocess 3183122
00:17:29.374 14:22:10 -- common/autotest_common.sh@936 -- # '[' -z 3183122 ']'
00:17:29.374 14:22:10 -- common/autotest_common.sh@940 -- # kill -0 3183122
00:17:29.374 14:22:10 -- common/autotest_common.sh@941 -- # uname
00:17:29.374 14:22:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:29.374 14:22:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3183122
00:17:29.374 14:22:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:17:29.374 14:22:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:17:29.374 14:22:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3183122'
00:17:29.374 killing process with pid 3183122
00:17:29.374 14:22:10 -- common/autotest_common.sh@955 -- # kill 3183122
00:17:29.374 14:22:10 -- common/autotest_common.sh@960 -- # wait 3183122
00:17:29.641 14:22:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:17:29.641 14:22:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:17:29.641 14:22:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:17:29.641 14:22:11 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:17:29.641 14:22:11 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:17:29.641 14:22:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
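
Both fio jobs above bypass the kernel NVMe initiator entirely: the ldd/grep/awk probing checks whether an ASan runtime would have to be preloaded ahead of SPDK's fio plugin (none is found here, so asan_lib stays empty), the plugin is then LD_PRELOADed into a stock /usr/src/fio binary, and the target is addressed through the plugin's --filename transport-ID syntax rather than a block device (the job files select ioengine=spdk, as both fio banners echo). A minimal sketch with the workspace paths shortened:

  # fio through the SPDK NVMe plugin, as in both runs above
  LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio \
      ./app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
      --bs=4096
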
00:17:29.641 14:22:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:29.641 14:22:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.543 14:22:13 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:31.543 00:17:31.543 real 0m9.856s 00:17:31.543 user 0m26.159s 00:17:31.543 sys 0m3.433s 00:17:31.543 14:22:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:31.543 14:22:13 -- common/autotest_common.sh@10 -- # set +x 00:17:31.543 ************************************ 00:17:31.543 END TEST nvmf_fio_host 00:17:31.543 ************************************ 00:17:31.802 14:22:13 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:31.802 14:22:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:31.802 14:22:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:31.802 14:22:13 -- common/autotest_common.sh@10 -- # set +x 00:17:31.802 ************************************ 00:17:31.802 START TEST nvmf_failover 00:17:31.802 ************************************ 00:17:31.802 14:22:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:31.802 * Looking for test storage... 00:17:31.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:31.802 14:22:13 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:31.802 14:22:13 -- nvmf/common.sh@7 -- # uname -s 00:17:31.802 14:22:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:31.802 14:22:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:31.802 14:22:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:31.802 14:22:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:31.802 14:22:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:31.802 14:22:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:31.802 14:22:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:31.802 14:22:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:31.802 14:22:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:31.802 14:22:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:31.802 14:22:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:31.802 14:22:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:17:31.802 14:22:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:31.802 14:22:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:31.802 14:22:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:31.802 14:22:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:31.802 14:22:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:31.802 14:22:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:31.802 14:22:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:31.802 14:22:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:31.803 14:22:13 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.803 14:22:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.803 14:22:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.803 14:22:13 -- paths/export.sh@5 -- # export PATH 00:17:31.803 14:22:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:31.803 14:22:13 -- nvmf/common.sh@47 -- # : 0 00:17:31.803 14:22:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:31.803 14:22:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:31.803 14:22:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:31.803 14:22:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:31.803 14:22:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:31.803 14:22:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:31.803 14:22:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:31.803 14:22:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:31.803 14:22:13 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:31.803 14:22:13 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:31.803 14:22:13 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:31.803 14:22:13 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:31.803 14:22:13 -- host/failover.sh@18 -- # nvmftestinit 00:17:31.803 14:22:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:31.803 14:22:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:31.803 14:22:13 -- nvmf/common.sh@437 -- # 
prepare_net_devs 00:17:31.803 14:22:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:31.803 14:22:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:31.803 14:22:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.803 14:22:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:31.803 14:22:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:31.803 14:22:13 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:31.803 14:22:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:31.803 14:22:13 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:31.803 14:22:13 -- common/autotest_common.sh@10 -- # set +x 00:17:33.705 14:22:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:33.705 14:22:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:33.705 14:22:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:33.705 14:22:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:33.705 14:22:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:33.705 14:22:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:33.705 14:22:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:33.705 14:22:14 -- nvmf/common.sh@295 -- # net_devs=() 00:17:33.705 14:22:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:33.705 14:22:14 -- nvmf/common.sh@296 -- # e810=() 00:17:33.705 14:22:14 -- nvmf/common.sh@296 -- # local -ga e810 00:17:33.705 14:22:14 -- nvmf/common.sh@297 -- # x722=() 00:17:33.705 14:22:14 -- nvmf/common.sh@297 -- # local -ga x722 00:17:33.705 14:22:14 -- nvmf/common.sh@298 -- # mlx=() 00:17:33.705 14:22:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:33.705 14:22:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:33.705 14:22:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:33.705 14:22:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:33.705 14:22:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:33.705 14:22:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:33.705 14:22:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:33.705 14:22:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:33.705 14:22:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:33.705 14:22:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:33.705 14:22:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:33.705 14:22:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:33.705 14:22:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:33.705 14:22:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:33.705 14:22:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:33.705 14:22:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:33.705 14:22:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:33.705 14:22:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:33.705 14:22:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:33.705 14:22:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:17:33.705 Found 0000:08:00.0 (0x8086 - 0x159b) 00:17:33.705 14:22:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:33.705 14:22:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:33.705 14:22:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.705 14:22:14 -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.705 14:22:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:33.705 14:22:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:33.705 14:22:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:17:33.705 Found 0000:08:00.1 (0x8086 - 0x159b) 00:17:33.705 14:22:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:33.705 14:22:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:33.705 14:22:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.705 14:22:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.705 14:22:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:33.706 14:22:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:33.706 14:22:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:33.706 14:22:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:33.706 14:22:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:33.706 14:22:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.706 14:22:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:33.706 14:22:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.706 14:22:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:17:33.706 Found net devices under 0000:08:00.0: cvl_0_0 00:17:33.706 14:22:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.706 14:22:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:33.706 14:22:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.706 14:22:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:33.706 14:22:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.706 14:22:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:17:33.706 Found net devices under 0000:08:00.1: cvl_0_1 00:17:33.706 14:22:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.706 14:22:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:33.706 14:22:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:33.706 14:22:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:33.706 14:22:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:33.706 14:22:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:33.706 14:22:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:33.706 14:22:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:33.706 14:22:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:33.706 14:22:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:33.706 14:22:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:33.706 14:22:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:33.706 14:22:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:33.706 14:22:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:33.706 14:22:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:33.706 14:22:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:33.706 14:22:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:33.706 14:22:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:33.706 14:22:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:33.706 14:22:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:33.706 14:22:14 -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:33.706 14:22:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:17:33.706 14:22:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:33.706 14:22:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:33.706 14:22:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:33.706 14:22:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:17:33.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:33.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms
00:17:33.706
00:17:33.706 --- 10.0.0.2 ping statistics ---
00:17:33.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:33.706 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms
00:17:33.706 14:22:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:33.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:33.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms
00:17:33.706
00:17:33.706 --- 10.0.0.1 ping statistics ---
00:17:33.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:33.706 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms
00:17:33.706 14:22:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:33.706 14:22:14 -- nvmf/common.sh@411 -- # return 0
00:17:33.706 14:22:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:17:33.706 14:22:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:33.706 14:22:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:17:33.706 14:22:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:17:33.706 14:22:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:33.706 14:22:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:17:33.706 14:22:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:17:33.706 14:22:14 -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:17:33.706 14:22:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:17:33.706 14:22:14 -- common/autotest_common.sh@710 -- # xtrace_disable
00:17:33.706 14:22:14 -- common/autotest_common.sh@10 -- # set +x
00:17:33.706 14:22:14 -- nvmf/common.sh@470 -- # nvmfpid=3185245
00:17:33.706 14:22:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:17:33.706 14:22:14 -- nvmf/common.sh@471 -- # waitforlisten 3185245
00:17:33.706 14:22:14 -- common/autotest_common.sh@817 -- # '[' -z 3185245 ']'
00:17:33.706 14:22:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:33.706 14:22:15 -- common/autotest_common.sh@822 -- # local max_retries=100
00:17:33.706 14:22:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:33.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:33.706 14:22:15 -- common/autotest_common.sh@826 -- # xtrace_disable
00:17:33.706 14:22:15 -- common/autotest_common.sh@10 -- # set +x
00:17:33.706 [2024-04-26 14:22:15.047865] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
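
Unlike the fio-host target (core mask 0xF, four reactors), the failover target above is started with -m 0xE, which leaves core 0 free and pins its three reactors to cores 1-3, matching the reactor notices further down; -i 0 picks the shared-memory instance id and -e 0xFFFF enables every tracepoint group, as the startup notices confirm. Roughly, from the trace:

  # failover target launch as traced above (nvmfappstart -m 0xE)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!   # 3185245 in this run; waitforlisten then polls /var/tmp/spdk.sock
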
00:17:33.706 [2024-04-26 14:22:15.047966] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.706 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.706 [2024-04-26 14:22:15.114050] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:33.706 [2024-04-26 14:22:15.231889] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.706 [2024-04-26 14:22:15.231954] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.706 [2024-04-26 14:22:15.231969] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.706 [2024-04-26 14:22:15.231983] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.706 [2024-04-26 14:22:15.231995] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:33.706 [2024-04-26 14:22:15.232082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.706 [2024-04-26 14:22:15.232136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:33.706 [2024-04-26 14:22:15.232139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.964 14:22:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:33.964 14:22:15 -- common/autotest_common.sh@850 -- # return 0 00:17:33.964 14:22:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:33.964 14:22:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:33.964 14:22:15 -- common/autotest_common.sh@10 -- # set +x 00:17:33.964 14:22:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.964 14:22:15 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:34.222 [2024-04-26 14:22:15.638362] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.222 14:22:15 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:34.480 Malloc0 00:17:34.480 14:22:15 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:34.738 14:22:16 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:34.996 14:22:16 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:35.562 [2024-04-26 14:22:16.832204] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:35.562 14:22:16 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:35.562 [2024-04-26 14:22:17.129026] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:35.820 14:22:17 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4422
00:17:36.078 [2024-04-26 14:22:17.418021] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:17:36.078 14:22:17 -- host/failover.sh@31 -- # bdevperf_pid=3185479
00:17:36.078 14:22:17 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:17:36.078 14:22:17 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:17:36.078 14:22:17 -- host/failover.sh@34 -- # waitforlisten 3185479 /var/tmp/bdevperf.sock
00:17:36.078 14:22:17 -- common/autotest_common.sh@817 -- # '[' -z 3185479 ']'
00:17:36.078 14:22:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:17:36.078 14:22:17 -- common/autotest_common.sh@822 -- # local max_retries=100
00:17:36.078 14:22:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:17:36.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:17:36.078 14:22:17 -- common/autotest_common.sh@826 -- # xtrace_disable
00:17:36.078 14:22:17 -- common/autotest_common.sh@10 -- # set +x
00:17:36.336 14:22:17 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:17:36.336 14:22:17 -- common/autotest_common.sh@850 -- # return 0
00:17:36.336 14:22:17 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:17:36.595 NVMe0n1
00:17:36.595 14:22:18 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:17:37.162
00:17:37.162 14:22:18 -- host/failover.sh@39 -- # run_test_pid=3185581
00:17:37.162 14:22:18 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:17:37.162 14:22:18 -- host/failover.sh@41 -- # sleep 1
00:17:38.537 14:22:19 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:38.537 [2024-04-26 14:22:19.949445] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2101390 is same with the state(5) to be set
[... identical recv-state messages for tqpair=0x2101390 repeat while the port 4420 listener is torn down; duplicates elided ...]
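[Editor's note] The topology under test, condensed from the xtrace (every command below appears verbatim in this run; full script paths shortened to rpc.py): one malloc-backed subsystem with three TCP listeners, and a bdevperf initiator that attaches the same subsystem twice so NVMe0n1 has a standby path to fail over to.

  # target side (nvmf_tgt inside cvl_0_0_ns_spdk)
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # likewise 4421 and 4422
  # initiator side (bdevperf, RPC over /var/tmp/bdevperf.sock)
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

With verify I/O running (perform_tests, -q 128 -o 4096 -w verify -t 15), the remove_listener at host/failover.sh@43 above is the first failover trigger; the recv-state errors it provokes appear to be the target walking that listener's queue pair through teardown, not a test failure in themselves.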
00:17:38.537 14:22:19 -- host/failover.sh@45 -- # sleep 3
00:17:41.821 14:22:22 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:17:41.821 00
00:17:41.821 14:22:23 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:17:42.078 [2024-04-26 14:22:23.622420] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2101f30 is same with the state(5) to be set
[... identical recv-state messages for tqpair=0x2101f30 repeat while the port 4421 listener is torn down; duplicates elided ...]
00:17:42.078 14:22:23 -- host/failover.sh@50 -- # sleep 3
00:17:45.360 14:22:26 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:45.360 [2024-04-26 14:22:26.916347] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:45.618 14:22:26 -- host/failover.sh@55 -- # sleep 1
00:17:46.552 14:22:27 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:17:46.811 [2024-04-26 14:22:28.212980] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f58e20 is same with the state(5) to be set
[... identical recv-state messages for tqpair=0x1f58e20 repeat while the port 4422 listener is torn down; duplicates elided ...]
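[Editor's note] Recapping the failover choreography host/failover.sh just drove, annotated from the xtrace (full paths and repeated arguments elided with "..."):

  rpc.py nvmf_subsystem_remove_listener ... -s 4420   # @43: drop the active path; I/O fails over to 4421
  sleep 3                                             # @45: let verify I/O run on the surviving path
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller ... -s 4422 ...   # @47: add a third path
  rpc.py nvmf_subsystem_remove_listener ... -s 4421   # @48: drop 4421; I/O moves to 4422
  sleep 3                                             # @50
  rpc.py nvmf_subsystem_add_listener ... -s 4420      # @53: bring 4420 back
  sleep 1                                             # @55
  rpc.py nvmf_subsystem_remove_listener ... -s 4422   # @57: drop 4422; I/O fails back toward 4420

Each remove_listener is followed by a burst of nvmf_tcp_qpair_set_recv_state messages (tqpair=0x2101390, 0x2101f30, 0x1f58e20 above) as the target moves the dying queue pair into its error state; the pass/fail signal comes from bdevperf's verify workload, checked next.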
00:17:46.812 14:22:28 -- host/failover.sh@59 -- # wait 3185581
00:17:53.380 0
00:17:53.380 14:22:33 -- host/failover.sh@61 -- # killprocess 3185479
00:17:53.380 14:22:33 -- common/autotest_common.sh@936 -- # '[' -z 3185479 ']'
00:17:53.380 14:22:33 -- common/autotest_common.sh@940 -- # kill -0 3185479
00:17:53.380 14:22:33 -- common/autotest_common.sh@941 -- # uname
00:17:53.380 14:22:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:53.380 14:22:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3185479
00:17:53.380 14:22:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:17:53.380 14:22:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:17:53.380 14:22:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3185479'
00:17:53.380 killing process with pid 3185479
00:17:53.380 14:22:33 -- common/autotest_common.sh@955 -- # kill 3185479
00:17:53.380 14:22:33 -- common/autotest_common.sh@960 -- # wait 3185479
00:17:53.380 14:22:34 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:17:53.380 [2024-04-26 14:22:17.484308] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
00:17:53.380 [2024-04-26 14:22:17.484432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3185479 ]
00:17:53.380 EAL: No free 2048 kB hugepages reported on node 1
00:17:53.380 [2024-04-26 14:22:17.544538] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:53.380 [2024-04-26 14:22:17.659845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:53.380 Running I/O for 15 seconds...
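[Editor's note] What follows is try.txt, bdevperf's own log replayed after the fact by host/failover.sh@63; the 0 printed after wait 3185581 is consistent with perform_tests finishing its 15 seconds of verify I/O cleanly despite three path drops (a failing run would have taken the exit-1 trap instead). The ABORTED - SQ DELETION (00/08) completions below are expected collateral: status code type 00h / status code 08h is NVMe's generic "Command Aborted due to SQ Deletion", reported for every command that was in flight on the first path when its queue pair went away; those I/Os were evidently retried on the surviving path, or the verify workload would not have completed. A hypothetical spot-check, not part of this run, assuming the standard bdev_nvme_get_controllers RPC in this SPDK version:

  # list NVMe0's controller/path state from bdevperf's RPC socket mid-test
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0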
00:17:53.380 [2024-04-26 14:22:19.950773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:69832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:53.380 [2024-04-26 14:22:19.950816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for every READ and WRITE in flight on qid:1, lba:69840 through lba:70648, all completed ABORTED - SQ DELETION (00/08); duplicates elided ...]
00:17:53.383 [2024-04-26 14:22:19.954180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:17:53.383 [2024-04-26 14:22:19.954198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70656 len:8 PRP1 0x0 PRP2 0x0
00:17:53.383 [2024-04-26 14:22:19.954213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:53.383 [2024-04-26 14:22:19.954232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... the manual-complete/abort sequence repeats for the queued WRITEs lba:70664 through lba:70696; duplicates elided ...]
00:17:53.383 [2024-04-26 14:22:19.954483]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.383 [2024-04-26 14:22:19.954497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.383 [2024-04-26 14:22:19.954513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.383 [2024-04-26 14:22:19.954526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70704 len:8 PRP1 0x0 PRP2 0x0 00:17:53.383 [2024-04-26 14:22:19.954540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.383 [2024-04-26 14:22:19.954554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.383 [2024-04-26 14:22:19.954566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.383 [2024-04-26 14:22:19.954578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70712 len:8 PRP1 0x0 PRP2 0x0 00:17:53.383 [2024-04-26 14:22:19.954592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.383 [2024-04-26 14:22:19.954607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.383 [2024-04-26 14:22:19.954619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.383 [2024-04-26 14:22:19.954637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70720 len:8 PRP1 0x0 PRP2 0x0 00:17:53.383 [2024-04-26 14:22:19.954653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.383 [2024-04-26 14:22:19.954668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.383 [2024-04-26 14:22:19.954680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.383 [2024-04-26 14:22:19.954692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70728 len:8 PRP1 0x0 PRP2 0x0 00:17:53.383 [2024-04-26 14:22:19.954706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.383 [2024-04-26 14:22:19.954720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.383 [2024-04-26 14:22:19.954732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.383 [2024-04-26 14:22:19.954744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70736 len:8 PRP1 0x0 PRP2 0x0 00:17:53.383 [2024-04-26 14:22:19.954758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.383 [2024-04-26 14:22:19.954773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.383 [2024-04-26 14:22:19.954785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.383 [2024-04-26 14:22:19.954802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70744 len:8 PRP1 0x0 PRP2 0x0 00:17:53.383 [2024-04-26 14:22:19.954817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.383 [2024-04-26 14:22:19.954832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.383 [2024-04-26 14:22:19.954844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.383 [2024-04-26 14:22:19.954856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70752 len:8 PRP1 0x0 PRP2 0x0 00:17:53.383 [2024-04-26 14:22:19.954870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.383 [2024-04-26 14:22:19.954884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.383 [2024-04-26 14:22:19.954896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.383 [2024-04-26 14:22:19.954909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70760 len:8 PRP1 0x0 PRP2 0x0 00:17:53.383 [2024-04-26 14:22:19.954923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.383 [2024-04-26 14:22:19.954941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.383 [2024-04-26 14:22:19.954954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.383 [2024-04-26 14:22:19.954966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70768 len:8 PRP1 0x0 PRP2 0x0 00:17:53.383 [2024-04-26 14:22:19.954980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.383 [2024-04-26 14:22:19.954994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.383 [2024-04-26 14:22:19.955006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.383 [2024-04-26 14:22:19.955019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70776 len:8 PRP1 0x0 PRP2 0x0 00:17:53.383 [2024-04-26 14:22:19.955033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.383 [2024-04-26 14:22:19.955047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.383 [2024-04-26 14:22:19.955059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.383 [2024-04-26 14:22:19.955071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70784 len:8 PRP1 0x0 PRP2 0x0 00:17:53.383 [2024-04-26 14:22:19.955085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.383 [2024-04-26 14:22:19.955099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.383 [2024-04-26 14:22:19.955111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.383 [2024-04-26 14:22:19.955124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70792 len:8 PRP1 0x0 PRP2 0x0 00:17:53.383 [2024-04-26 14:22:19.955137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:53.383 [2024-04-26 14:22:19.955152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.383 [2024-04-26 14:22:19.955164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.383 [2024-04-26 14:22:19.955176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70800 len:8 PRP1 0x0 PRP2 0x0 00:17:53.383 [2024-04-26 14:22:19.955190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.383 [2024-04-26 14:22:19.955205] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.383 [2024-04-26 14:22:19.955217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.383 [2024-04-26 14:22:19.955234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70808 len:8 PRP1 0x0 PRP2 0x0 00:17:53.383 [2024-04-26 14:22:19.955248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.383 [2024-04-26 14:22:19.955262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.383 [2024-04-26 14:22:19.955274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.383 [2024-04-26 14:22:19.955287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70816 len:8 PRP1 0x0 PRP2 0x0 00:17:53.383 [2024-04-26 14:22:19.955301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.383 [2024-04-26 14:22:19.955315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.383 [2024-04-26 14:22:19.955327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.383 [2024-04-26 14:22:19.955340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70824 len:8 PRP1 0x0 PRP2 0x0 00:17:53.384 [2024-04-26 14:22:19.955357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.384 [2024-04-26 14:22:19.955372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.384 [2024-04-26 14:22:19.955384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.384 [2024-04-26 14:22:19.955397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70832 len:8 PRP1 0x0 PRP2 0x0 00:17:53.384 [2024-04-26 14:22:19.955411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.384 [2024-04-26 14:22:19.955425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.384 [2024-04-26 14:22:19.955437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.384 [2024-04-26 14:22:19.955450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70840 len:8 PRP1 0x0 PRP2 0x0 00:17:53.384 [2024-04-26 14:22:19.955464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.384 [2024-04-26 
14:22:19.955478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.384 [2024-04-26 14:22:19.955490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.384 [2024-04-26 14:22:19.955503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70848 len:8 PRP1 0x0 PRP2 0x0 00:17:53.384 [2024-04-26 14:22:19.955516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.384 [2024-04-26 14:22:19.955531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.384 [2024-04-26 14:22:19.955542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.384 [2024-04-26 14:22:19.955555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70264 len:8 PRP1 0x0 PRP2 0x0 00:17:53.384 [2024-04-26 14:22:19.955568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.384 [2024-04-26 14:22:19.955626] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17967b0 was disconnected and freed. reset controller. 00:17:53.384 [2024-04-26 14:22:19.955658] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:17:53.384 [2024-04-26 14:22:19.955692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.384 [2024-04-26 14:22:19.955711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.384 [2024-04-26 14:22:19.955727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.384 [2024-04-26 14:22:19.955742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.384 [2024-04-26 14:22:19.955758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.384 [2024-04-26 14:22:19.955772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.384 [2024-04-26 14:22:19.955787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.384 [2024-04-26 14:22:19.955801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.384 [2024-04-26 14:22:19.955815] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:53.384 [2024-04-26 14:22:19.959890] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:53.384 [2024-04-26 14:22:19.959937] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a0e80 (9): Bad file descriptor 00:17:53.384 [2024-04-26 14:22:20.037143] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
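The records above trace one full bdev_nvme recovery cycle: the TCP qpair to 10.0.0.2:4420 drops, every in-flight and queued command is completed with ABORTED - SQ DELETION, the driver fails over to the second listener at 10.0.0.2:4421, and the controller reset completes. Below is a minimal sketch of that detect-and-recover loop against SPDK's public NVMe API (spdk/nvme.h); the function name poll_or_recover, the error handling, and the logging are illustrative assumptions, and note that plain spdk_nvme_ctrlr_reset() reconnects to the current transport ID, with the trid switch being an extra step the bdev layer performs.

/* Sketch: poll an I/O qpair; on transport failure, reset the controller
 * and re-allocate the qpair. Assumes ctrlr and qpair were connected
 * elsewhere (e.g. via spdk_nvme_connect()). */
#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

static int
poll_or_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair **qpair)
{
	/* Returns the number of completions reaped, or a negative errno once
	 * the qpair has failed (e.g. the TCP socket went away). Queued
	 * requests are then completed with ABORTED - SQ DELETION status,
	 * which is exactly the burst of NOTICE lines in the log above. */
	int32_t rc = spdk_nvme_qpair_process_completions(*qpair, 0);

	if (rc >= 0) {
		return 0;
	}

	fprintf(stderr, "qpair failed (%d), resetting controller\n", rc);
	spdk_nvme_ctrlr_free_io_qpair(*qpair);
	*qpair = NULL;

	/* Disconnect and reconnect the controller. The bdev layer would
	 * first switch to the next registered trid (4420 -> 4421 here). */
	if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
		return -EIO;
	}

	*qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	return (*qpair != NULL) ? 0 : -ENOMEM;
}

Passing max_completions = 0 reaps everything available; a negative return is the signal that the submission queue is gone, after which the queued I/O must be failed or retried on a fresh qpair, as the second burst below shows happening again on the new connection.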
00:17:53.384 [2024-04-26 14:22:23.624870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:37720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:53.384 [2024-04-26 14:22:23.624916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 13 more identical READ / ABORTED - SQ DELETION pairs elided (lba:37728 through lba:37824, all qid:1) ...]
00:17:53.384 [2024-04-26 14:22:23.625395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:37888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:53.384 [2024-04-26 14:22:23.625410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 79 more identical WRITE / ABORTED - SQ DELETION pairs elided (lba:37896 through lba:38520, all qid:1) ...]
00:17:53.386 [2024-04-26 14:22:23.628005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:17:53.386 [2024-04-26 14:22:23.628024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38528 len:8 PRP1 0x0 PRP2 0x0
00:17:53.386 [2024-04-26 14:22:23.628038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 21 more manually completed queued writes elided (lba:38536 through lba:38696, each preceded by "aborting queued i/o"), all completed with ABORTED - SQ DELETION ...]
00:17:53.388 [2024-04-26 14:22:23.629207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:17:53.388 [2024-04-26 14:22:23.629219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:17:53.388 [2024-04-26 14:22:23.629231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38704 len:8 PRP1 0x0 PRP2 0x0
00:17:53.388 [2024-04-26 14:22:23.629245] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.388 [2024-04-26 14:22:23.629259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.388 [2024-04-26 14:22:23.629271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.388 [2024-04-26 14:22:23.629283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38712 len:8 PRP1 0x0 PRP2 0x0 00:17:53.388 [2024-04-26 14:22:23.629297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.388 [2024-04-26 14:22:23.629311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.388 [2024-04-26 14:22:23.629323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.388 [2024-04-26 14:22:23.629336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38720 len:8 PRP1 0x0 PRP2 0x0 00:17:53.388 [2024-04-26 14:22:23.629350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.388 [2024-04-26 14:22:23.629364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.388 [2024-04-26 14:22:23.629376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.388 [2024-04-26 14:22:23.629388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38728 len:8 PRP1 0x0 PRP2 0x0 00:17:53.388 [2024-04-26 14:22:23.629402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.388 [2024-04-26 14:22:23.629416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.388 [2024-04-26 14:22:23.629428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.388 [2024-04-26 14:22:23.629440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38736 len:8 PRP1 0x0 PRP2 0x0 00:17:53.388 [2024-04-26 14:22:23.629454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.388 [2024-04-26 14:22:23.629472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.388 [2024-04-26 14:22:23.629485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.388 [2024-04-26 14:22:23.629497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37832 len:8 PRP1 0x0 PRP2 0x0 00:17:53.388 [2024-04-26 14:22:23.629511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.388 [2024-04-26 14:22:23.629526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.388 [2024-04-26 14:22:23.629538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.388 [2024-04-26 14:22:23.629551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37840 len:8 PRP1 0x0 PRP2 0x0 00:17:53.388 [2024-04-26 14:22:23.629565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.388 [2024-04-26 14:22:23.629579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.388 [2024-04-26 14:22:23.629591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.388 [2024-04-26 14:22:23.629604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37848 len:8 PRP1 0x0 PRP2 0x0 00:17:53.388 [2024-04-26 14:22:23.629618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.388 [2024-04-26 14:22:23.629637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.388 [2024-04-26 14:22:23.629651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.388 [2024-04-26 14:22:23.629663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37856 len:8 PRP1 0x0 PRP2 0x0 00:17:53.388 [2024-04-26 14:22:23.629677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.388 [2024-04-26 14:22:23.629692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.388 [2024-04-26 14:22:23.629704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.388 [2024-04-26 14:22:23.629716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37864 len:8 PRP1 0x0 PRP2 0x0 00:17:53.388 [2024-04-26 14:22:23.629731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.388 [2024-04-26 14:22:23.629745] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.388 [2024-04-26 14:22:23.629758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.388 [2024-04-26 14:22:23.629770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37872 len:8 PRP1 0x0 PRP2 0x0 00:17:53.388 [2024-04-26 14:22:23.629785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.388 [2024-04-26 14:22:23.629799] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.388 [2024-04-26 14:22:23.629811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.388 [2024-04-26 14:22:23.629825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37880 len:8 PRP1 0x0 PRP2 0x0 00:17:53.388 [2024-04-26 14:22:23.629839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.388 [2024-04-26 14:22:23.629896] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x196a270 was disconnected and freed. reset controller. 
00:17:53.388 [2024-04-26 14:22:23.629925] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:17:53.388 [2024-04-26 14:22:23.629962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:53.388 [2024-04-26 14:22:23.629985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:53.388 [2024-04-26 14:22:23.630004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:53.388 [2024-04-26 14:22:23.630019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:53.388 [2024-04-26 14:22:23.630034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:53.388 [2024-04-26 14:22:23.630048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:53.388 [2024-04-26 14:22:23.630065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:17:53.388 [2024-04-26 14:22:23.630079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:53.388 [2024-04-26 14:22:23.630094] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:53.388 [2024-04-26 14:22:23.630159] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a0e80 (9): Bad file descriptor
00:17:53.388 [2024-04-26 14:22:23.634130] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:53.388 [2024-04-26 14:22:23.671850] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
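[editor's note] The storm above is mechanical: when the TCP qpair is torn down for failover, every in-flight and queued command is completed with ABORTED - SQ DELETION (00/08), so each burst encodes little more than a contiguous LBA range per opcode. A throwaway parser along these lines (hypothetical, not part of the SPDK tree) folds such a transcript into per-run summaries; the record format it matches is taken verbatim from the lines above:

```python
#!/usr/bin/env python3
"""Hypothetical helper: fold SPDK qpair-abort storms into contiguous LBA runs."""
import re
import sys

# Matches command prints such as:
#   nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38560 len:8 PRP1 0x0 PRP2 0x0
CMD = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) .*?lba:(\d+) len:(\d+)")

def summarize(stream):
    runs = []  # each entry: [opcode, first_lba, last_lba, count]
    for line in stream:
        m = CMD.search(line)
        if not m:
            continue  # skip completions, admin commands, and other records
        op, lba, length = m.group(1), int(m.group(2)), int(m.group(3))
        # Extend the current run if this command continues it contiguously.
        if runs and runs[-1][0] == op and lba == runs[-1][2] + length:
            runs[-1][2] = lba
            runs[-1][3] += 1
        else:
            runs.append([op, lba, lba, 1])
    for op, first, last, count in runs:
        print(f"{op}: {count} commands, lba {first}..{last}")

if __name__ == "__main__":
    summarize(sys.stdin)
```

Piped over the 14:22:23 burst, this would report one line per contiguous run, e.g. `READ: 7 commands, lba 37832..37880` for the queued READs above.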
00:17:53.388 [2024-04-26 14:22:28.213907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:53.388 [2024-04-26 14:22:28.213951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command / "ABORTED - SQ DELETION" pairs repeat, interleaved, for the remaining 41 in-flight READs (SGL TRANSPORT DATA BLOCK, lba:60032 through lba:60352, len:8, various cids) and 55 in-flight WRITEs (SGL DATA BLOCK OFFSET, lba:60360 through lba:60792, len:8, various cids) ...]
00:17:53.391 [2024-04-26 14:22:28.217136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:17:53.391 [2024-04-26 14:22:28.217155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60800 len:8 PRP1 0x0 PRP2 0x0
00:17:53.391 [2024-04-26 14:22:28.217169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the "aborting queued i/o" / "Command completed manually" sequence repeats for the remaining 26 queued WRITEs, lba:60808 through lba:61008 (len:8 each) ...]
00:17:53.392 [2024-04-26 14:22:28.218601]
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.392 [2024-04-26 14:22:28.218613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.392 [2024-04-26 14:22:28.218625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61016 len:8 PRP1 0x0 PRP2 0x0 00:17:53.392 [2024-04-26 14:22:28.218645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.392 [2024-04-26 14:22:28.218661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.392 [2024-04-26 14:22:28.218673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.392 [2024-04-26 14:22:28.218685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61024 len:8 PRP1 0x0 PRP2 0x0 00:17:53.393 [2024-04-26 14:22:28.218699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.393 [2024-04-26 14:22:28.218713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.393 [2024-04-26 14:22:28.218725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.393 [2024-04-26 14:22:28.218738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61032 len:8 PRP1 0x0 PRP2 0x0 00:17:53.393 [2024-04-26 14:22:28.218752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.393 [2024-04-26 14:22:28.218766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:53.393 [2024-04-26 14:22:28.218781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:53.393 [2024-04-26 14:22:28.218794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61040 len:8 PRP1 0x0 PRP2 0x0 00:17:53.393 [2024-04-26 14:22:28.218808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.393 [2024-04-26 14:22:28.218865] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17ad370 was disconnected and freed. reset controller. 
00:17:53.393 [2024-04-26 14:22:28.218890] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:17:53.393 [2024-04-26 14:22:28.218925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.393 [2024-04-26 14:22:28.218943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.393 [2024-04-26 14:22:28.218959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.393 [2024-04-26 14:22:28.218974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.393 [2024-04-26 14:22:28.218989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.393 [2024-04-26 14:22:28.219003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.393 [2024-04-26 14:22:28.219018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.393 [2024-04-26 14:22:28.219032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.393 [2024-04-26 14:22:28.219046] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:53.393 [2024-04-26 14:22:28.219103] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a0e80 (9): Bad file descriptor 00:17:53.393 [2024-04-26 14:22:28.223084] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:53.393 [2024-04-26 14:22:28.295716] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
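
The three "Resetting controller successful" notices in this run are exactly what the script asserts on next: bdevperf's output is captured to try.txt and the notices are counted. A minimal sketch of that check, using the same file and pattern as the trace ($SPDK is shorthand introduced here for the workspace spdk directory):

  # One successful controller reset is expected per forced failover
  # (4420 -> 4421 -> 4422 -> 4420), so anything other than 3 fails the test.
  count=$(grep -c 'Resetting controller successful' "$SPDK/test/nvmf/host/try.txt")
  (( count != 3 )) && exit 1
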
00:17:53.393 00:17:53.393 Latency(us) 00:17:53.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.393 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:53.393 Verification LBA range: start 0x0 length 0x4000 00:17:53.393 NVMe0n1 : 15.00 7349.10 28.71 393.81 0.00 16498.68 618.95 17961.72 00:17:53.393 =================================================================================================================== 00:17:53.393 Total : 7349.10 28.71 393.81 0.00 16498.68 618.95 17961.72 00:17:53.393 Received shutdown signal, test time was about 15.000000 seconds 00:17:53.393 00:17:53.393 Latency(us) 00:17:53.393 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.393 =================================================================================================================== 00:17:53.393 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:53.393 14:22:34 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:17:53.393 14:22:34 -- host/failover.sh@65 -- # count=3 00:17:53.393 14:22:34 -- host/failover.sh@67 -- # (( count != 3 )) 00:17:53.393 14:22:34 -- host/failover.sh@73 -- # bdevperf_pid=3186979 00:17:53.393 14:22:34 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:17:53.393 14:22:34 -- host/failover.sh@75 -- # waitforlisten 3186979 /var/tmp/bdevperf.sock 00:17:53.393 14:22:34 -- common/autotest_common.sh@817 -- # '[' -z 3186979 ']' 00:17:53.393 14:22:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:53.393 14:22:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:53.393 14:22:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:53.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
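
The trace above relaunches bdevperf in wait-for-RPC mode so its bdevs can be configured over a private RPC socket before any I/O runs. A condensed sketch of that launch, with the flags taken verbatim from the trace ($SPDK and $SOCK are shorthand introduced here):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bdevperf.sock
  # -z: start idle and wait for RPCs; -q 128 -o 4096 -w verify -t 1: queue depth
  # 128, 4096-byte I/O, verify workload, 1 s runtime; -f: continue on hot-remove.
  "$SPDK/build/examples/bdevperf" -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
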
00:17:53.393 14:22:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:53.393 14:22:34 -- common/autotest_common.sh@10 -- # set +x 00:17:53.393 14:22:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:53.393 14:22:34 -- common/autotest_common.sh@850 -- # return 0 00:17:53.393 14:22:34 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:53.393 [2024-04-26 14:22:34.677125] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:53.393 14:22:34 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:53.651 [2024-04-26 14:22:34.970018] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:53.651 14:22:34 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:53.909 NVMe0n1 00:17:53.909 14:22:35 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:54.474 00:17:54.474 14:22:35 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:54.732 00:17:54.732 14:22:36 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:54.732 14:22:36 -- host/failover.sh@82 -- # grep -q NVMe0 00:17:54.990 14:22:36 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:55.249 14:22:36 -- host/failover.sh@87 -- # sleep 3 00:17:58.537 14:22:39 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:58.537 14:22:39 -- host/failover.sh@88 -- # grep -q NVMe0 00:17:58.537 14:22:39 -- host/failover.sh@90 -- # run_test_pid=3187576 00:17:58.537 14:22:39 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:58.537 14:22:39 -- host/failover.sh@92 -- # wait 3187576 00:17:59.537 0 00:17:59.537 14:22:41 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:17:59.537 [2024-04-26 14:22:34.127325] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
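
Steps 76 through 87 above register one controller name with three target paths and then drop the active one. A condensed sketch of that multipath registration, using the same addresses and NQN as the trace ($RPC/$NQN are shorthand introduced here):

  RPC="$SPDK/scripts/rpc.py -s $SOCK"
  NQN=nqn.2016-06.io.spdk:cnode1
  # The first attach creates bdev NVMe0n1 on the 4420 path; the next two add
  # 4421 and 4422 as alternate (failover) paths under the same name.
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
  # Removing the active 4420 path forces bdev_nvme to fail over to 4421.
  $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
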
00:17:59.537 [2024-04-26 14:22:34.127437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3186979 ] 00:17:59.537 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.537 [2024-04-26 14:22:34.188099] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.537 [2024-04-26 14:22:34.301998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.537 [2024-04-26 14:22:36.630776] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:17:59.537 [2024-04-26 14:22:36.630853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.537 [2024-04-26 14:22:36.630877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.537 [2024-04-26 14:22:36.630896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.537 [2024-04-26 14:22:36.630910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.537 [2024-04-26 14:22:36.630925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.537 [2024-04-26 14:22:36.630940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.537 [2024-04-26 14:22:36.630955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:59.537 [2024-04-26 14:22:36.630969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:59.537 [2024-04-26 14:22:36.630983] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:59.537 [2024-04-26 14:22:36.631047] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:59.537 [2024-04-26 14:22:36.631080] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17e8e80 (9): Bad file descriptor 00:17:59.537 [2024-04-26 14:22:36.643227] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:59.537 Running I/O for 1 seconds... 
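
The verify workload whose output ends above was triggered out-of-band through bdevperf's RPC helper (host/failover.sh@89 in the trace), after which the script keeps asserting that the controller survived the forced failover. A sketch of that pattern, reusing the shorthand from the previous snippets:

  # Kick off the preconfigured workload on the idle bdevperf instance.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
  # After each failover the NVMe0 controller must still be registered.
  $SPDK/scripts/rpc.py -s "$SOCK" bdev_nvme_get_controllers | grep -q NVMe0
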
00:17:59.537 00:17:59.537 Latency(us) 00:17:59.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.537 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:59.537 Verification LBA range: start 0x0 length 0x4000 00:17:59.537 NVMe0n1 : 1.01 7279.91 28.44 0.00 0.00 17501.29 1747.63 16214.09 00:17:59.537 =================================================================================================================== 00:17:59.537 Total : 7279.91 28.44 0.00 0.00 17501.29 1747.63 16214.09 00:17:59.537 14:22:41 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:59.537 14:22:41 -- host/failover.sh@95 -- # grep -q NVMe0 00:18:00.102 14:22:41 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:00.102 14:22:41 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:00.102 14:22:41 -- host/failover.sh@99 -- # grep -q NVMe0 00:18:00.360 14:22:41 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:00.618 14:22:42 -- host/failover.sh@101 -- # sleep 3 00:18:03.897 14:22:45 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:03.897 14:22:45 -- host/failover.sh@103 -- # grep -q NVMe0 00:18:03.897 14:22:45 -- host/failover.sh@108 -- # killprocess 3186979 00:18:03.897 14:22:45 -- common/autotest_common.sh@936 -- # '[' -z 3186979 ']' 00:18:03.897 14:22:45 -- common/autotest_common.sh@940 -- # kill -0 3186979 00:18:03.897 14:22:45 -- common/autotest_common.sh@941 -- # uname 00:18:03.897 14:22:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:03.897 14:22:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3186979 00:18:03.897 14:22:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:03.897 14:22:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:03.897 14:22:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3186979' 00:18:03.897 killing process with pid 3186979 00:18:03.897 14:22:45 -- common/autotest_common.sh@955 -- # kill 3186979 00:18:03.897 14:22:45 -- common/autotest_common.sh@960 -- # wait 3186979 00:18:04.155 14:22:45 -- host/failover.sh@110 -- # sync 00:18:04.155 14:22:45 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:04.413 14:22:45 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:18:04.413 14:22:45 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:04.413 14:22:45 -- host/failover.sh@116 -- # nvmftestfini 00:18:04.413 14:22:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:04.413 14:22:45 -- nvmf/common.sh@117 -- # sync 00:18:04.413 14:22:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:04.413 14:22:45 -- nvmf/common.sh@120 -- # set +e 00:18:04.413 14:22:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:04.413 14:22:45 -- nvmf/common.sh@122 -- 
# modprobe -v -r nvme-tcp 00:18:04.413 rmmod nvme_tcp 00:18:04.413 rmmod nvme_fabrics 00:18:04.671 rmmod nvme_keyring 00:18:04.671 14:22:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:04.671 14:22:45 -- nvmf/common.sh@124 -- # set -e 00:18:04.671 14:22:45 -- nvmf/common.sh@125 -- # return 0 00:18:04.671 14:22:45 -- nvmf/common.sh@478 -- # '[' -n 3185245 ']' 00:18:04.671 14:22:45 -- nvmf/common.sh@479 -- # killprocess 3185245 00:18:04.671 14:22:45 -- common/autotest_common.sh@936 -- # '[' -z 3185245 ']' 00:18:04.671 14:22:45 -- common/autotest_common.sh@940 -- # kill -0 3185245 00:18:04.671 14:22:45 -- common/autotest_common.sh@941 -- # uname 00:18:04.671 14:22:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:04.671 14:22:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3185245 00:18:04.671 14:22:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:04.671 14:22:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:04.671 14:22:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3185245' 00:18:04.671 killing process with pid 3185245 00:18:04.671 14:22:46 -- common/autotest_common.sh@955 -- # kill 3185245 00:18:04.671 14:22:46 -- common/autotest_common.sh@960 -- # wait 3185245 00:18:04.931 14:22:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:04.931 14:22:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:04.931 14:22:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:04.931 14:22:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:04.931 14:22:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:04.931 14:22:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.931 14:22:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:04.931 14:22:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.839 14:22:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:06.839 00:18:06.839 real 0m35.071s 00:18:06.839 user 2m5.289s 00:18:06.839 sys 0m5.522s 00:18:06.839 14:22:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:06.839 14:22:48 -- common/autotest_common.sh@10 -- # set +x 00:18:06.839 ************************************ 00:18:06.839 END TEST nvmf_failover 00:18:06.839 ************************************ 00:18:06.839 14:22:48 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:06.839 14:22:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:06.839 14:22:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:06.839 14:22:48 -- common/autotest_common.sh@10 -- # set +x 00:18:07.098 ************************************ 00:18:07.098 START TEST nvmf_discovery 00:18:07.098 ************************************ 00:18:07.098 14:22:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:07.098 * Looking for test storage... 
00:18:07.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:07.098 14:22:48 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:07.098 14:22:48 -- nvmf/common.sh@7 -- # uname -s 00:18:07.098 14:22:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.098 14:22:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.098 14:22:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.098 14:22:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.098 14:22:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.098 14:22:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.098 14:22:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.098 14:22:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.098 14:22:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.098 14:22:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.098 14:22:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:07.098 14:22:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:07.098 14:22:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.098 14:22:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.098 14:22:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:07.098 14:22:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:07.098 14:22:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:07.098 14:22:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.098 14:22:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.098 14:22:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.098 14:22:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.099 14:22:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.099 14:22:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.099 14:22:48 -- paths/export.sh@5 -- # export PATH 00:18:07.099 14:22:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.099 14:22:48 -- nvmf/common.sh@47 -- # : 0 00:18:07.099 14:22:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:07.099 14:22:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:07.099 14:22:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:07.099 14:22:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.099 14:22:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.099 14:22:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:07.099 14:22:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:07.099 14:22:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:07.099 14:22:48 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:18:07.099 14:22:48 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:18:07.099 14:22:48 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:07.099 14:22:48 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:07.099 14:22:48 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:07.099 14:22:48 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:18:07.099 14:22:48 -- host/discovery.sh@25 -- # nvmftestinit 00:18:07.099 14:22:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:07.099 14:22:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.099 14:22:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:07.099 14:22:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:07.099 14:22:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:07.099 14:22:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.099 14:22:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.099 14:22:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.099 14:22:48 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:07.099 14:22:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:07.099 14:22:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:07.099 14:22:48 -- common/autotest_common.sh@10 -- # set +x 00:18:09.003 14:22:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:09.003 14:22:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:09.003 14:22:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:09.003 14:22:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:09.003 14:22:50 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:09.003 14:22:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:09.003 14:22:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:09.003 14:22:50 -- nvmf/common.sh@295 -- # net_devs=() 00:18:09.003 14:22:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:09.003 14:22:50 -- nvmf/common.sh@296 -- # e810=() 00:18:09.003 14:22:50 -- nvmf/common.sh@296 -- # local -ga e810 00:18:09.003 14:22:50 -- nvmf/common.sh@297 -- # x722=() 00:18:09.003 14:22:50 -- nvmf/common.sh@297 -- # local -ga x722 00:18:09.003 14:22:50 -- nvmf/common.sh@298 -- # mlx=() 00:18:09.003 14:22:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:09.003 14:22:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:09.003 14:22:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:09.003 14:22:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:09.003 14:22:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:09.003 14:22:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:09.003 14:22:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:09.003 14:22:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:09.003 14:22:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:09.003 14:22:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:09.003 14:22:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:09.003 14:22:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:09.003 14:22:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:09.003 14:22:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:09.003 14:22:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:09.003 14:22:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:09.003 14:22:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:09.003 14:22:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:09.003 14:22:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:09.003 14:22:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:09.003 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:09.003 14:22:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:09.003 14:22:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:09.003 14:22:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.003 14:22:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.003 14:22:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:09.003 14:22:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:09.003 14:22:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:09.003 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:09.003 14:22:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:09.003 14:22:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:09.003 14:22:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:09.003 14:22:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:09.003 14:22:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:09.003 14:22:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:09.003 14:22:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:09.003 14:22:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:09.003 14:22:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:09.003 
14:22:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.003 14:22:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:09.003 14:22:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.003 14:22:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:09.003 Found net devices under 0000:08:00.0: cvl_0_0 00:18:09.003 14:22:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.003 14:22:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:09.003 14:22:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.003 14:22:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:09.003 14:22:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.003 14:22:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:09.003 Found net devices under 0000:08:00.1: cvl_0_1 00:18:09.003 14:22:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.003 14:22:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:09.003 14:22:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:09.003 14:22:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:09.003 14:22:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:09.003 14:22:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:09.003 14:22:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:09.003 14:22:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:09.003 14:22:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:09.003 14:22:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:09.003 14:22:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:09.003 14:22:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:09.003 14:22:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:09.003 14:22:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:09.003 14:22:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:09.003 14:22:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:09.003 14:22:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:09.003 14:22:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:09.003 14:22:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:09.003 14:22:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:09.003 14:22:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:09.003 14:22:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:09.003 14:22:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:09.003 14:22:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:09.003 14:22:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:09.003 14:22:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:09.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:09.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:18:09.003 00:18:09.003 --- 10.0.0.2 ping statistics --- 00:18:09.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.003 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:18:09.003 14:22:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:09.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:09.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:18:09.004 00:18:09.004 --- 10.0.0.1 ping statistics --- 00:18:09.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.004 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:18:09.004 14:22:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:09.004 14:22:50 -- nvmf/common.sh@411 -- # return 0 00:18:09.004 14:22:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:09.004 14:22:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:09.004 14:22:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:09.004 14:22:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:09.004 14:22:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:09.004 14:22:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:09.004 14:22:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:09.004 14:22:50 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:18:09.004 14:22:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:09.004 14:22:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:09.004 14:22:50 -- common/autotest_common.sh@10 -- # set +x 00:18:09.004 14:22:50 -- nvmf/common.sh@470 -- # nvmfpid=3189597 00:18:09.004 14:22:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:09.004 14:22:50 -- nvmf/common.sh@471 -- # waitforlisten 3189597 00:18:09.004 14:22:50 -- common/autotest_common.sh@817 -- # '[' -z 3189597 ']' 00:18:09.004 14:22:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.004 14:22:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:09.004 14:22:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.004 14:22:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:09.004 14:22:50 -- common/autotest_common.sh@10 -- # set +x 00:18:09.004 [2024-04-26 14:22:50.242870] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:18:09.004 [2024-04-26 14:22:50.242957] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.004 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.004 [2024-04-26 14:22:50.306552] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.004 [2024-04-26 14:22:50.420604] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.004 [2024-04-26 14:22:50.420678] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.004 [2024-04-26 14:22:50.420694] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:09.004 [2024-04-26 14:22:50.420707] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:09.004 [2024-04-26 14:22:50.420721] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
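
Underneath the trace above, nvmf/common.sh moves one port of the e810 NIC into a private network namespace so the target (10.0.0.2, inside the netns) and the initiator (10.0.0.1, in the root namespace) talk over real hardware on one host, then starts nvmf_tgt inside that namespace. A condensed sketch of those steps, taken from the commands replayed above ($SPDK is shorthand introduced here):

  # Move the target-side port into its own namespace and address both ends.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # sanity check before any NVMe/TCP traffic
  # The target app runs inside the namespace, pinned to core 1 (-m 0x2),
  # with all tracepoint groups enabled (-e 0xFFFF), as in the trace.
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
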
00:18:09.004 [2024-04-26 14:22:50.420759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.004 14:22:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:09.004 14:22:50 -- common/autotest_common.sh@850 -- # return 0 00:18:09.004 14:22:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:09.004 14:22:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:09.004 14:22:50 -- common/autotest_common.sh@10 -- # set +x 00:18:09.004 14:22:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.004 14:22:50 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:09.004 14:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.004 14:22:50 -- common/autotest_common.sh@10 -- # set +x 00:18:09.004 [2024-04-26 14:22:50.556609] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.004 14:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.004 14:22:50 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:18:09.004 14:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.004 14:22:50 -- common/autotest_common.sh@10 -- # set +x 00:18:09.004 [2024-04-26 14:22:50.564730] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:09.004 14:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.004 14:22:50 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:09.004 14:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.004 14:22:50 -- common/autotest_common.sh@10 -- # set +x 00:18:09.262 null0 00:18:09.262 14:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.262 14:22:50 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:09.262 14:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.262 14:22:50 -- common/autotest_common.sh@10 -- # set +x 00:18:09.262 null1 00:18:09.262 14:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.262 14:22:50 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:09.262 14:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.262 14:22:50 -- common/autotest_common.sh@10 -- # set +x 00:18:09.262 14:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.262 14:22:50 -- host/discovery.sh@45 -- # hostpid=3189621 00:18:09.262 14:22:50 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:09.262 14:22:50 -- host/discovery.sh@46 -- # waitforlisten 3189621 /tmp/host.sock 00:18:09.262 14:22:50 -- common/autotest_common.sh@817 -- # '[' -z 3189621 ']' 00:18:09.262 14:22:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:18:09.262 14:22:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:09.262 14:22:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:09.262 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:09.262 14:22:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:09.262 14:22:50 -- common/autotest_common.sh@10 -- # set +x 00:18:09.262 [2024-04-26 14:22:50.640619] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
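
The discovery.sh steps traced above give the target a TCP transport, expose the discovery service on port 8009, create two null bdevs to publish later, and start a second SPDK app on /tmp/host.sock to play the host role; the bdev_nvme_start_discovery call the trace continues with below then attaches the host to that service. A condensed sketch of the sequence ($RPC is shorthand introduced here for the target's rpc.py; all arguments are as traced):

  RPC="$SPDK/scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009
  $RPC bdev_null_create null0 1000 512    # size and block size as in the trace
  $RPC bdev_null_create null1 1000 512
  "$SPDK/build/bin/nvmf_tgt" -m 0x1 -r /tmp/host.sock &   # host-side app
  # Subsystems reported by the discovery service get auto-attached on the host
  # with the "nvme" bdev name prefix (nvme0, nvme0n1, ...).
  $RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
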
00:18:09.262 [2024-04-26 14:22:50.640716] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3189621 ] 00:18:09.262 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.262 [2024-04-26 14:22:50.701103] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.262 [2024-04-26 14:22:50.818612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.520 14:22:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:09.520 14:22:50 -- common/autotest_common.sh@850 -- # return 0 00:18:09.520 14:22:50 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:09.520 14:22:50 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:09.520 14:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.520 14:22:50 -- common/autotest_common.sh@10 -- # set +x 00:18:09.520 14:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.520 14:22:50 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:09.520 14:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.521 14:22:50 -- common/autotest_common.sh@10 -- # set +x 00:18:09.521 14:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.521 14:22:50 -- host/discovery.sh@72 -- # notify_id=0 00:18:09.521 14:22:50 -- host/discovery.sh@83 -- # get_subsystem_names 00:18:09.521 14:22:50 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:09.521 14:22:50 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:09.521 14:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.521 14:22:50 -- common/autotest_common.sh@10 -- # set +x 00:18:09.521 14:22:50 -- host/discovery.sh@59 -- # sort 00:18:09.521 14:22:50 -- host/discovery.sh@59 -- # xargs 00:18:09.521 14:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.521 14:22:50 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:09.521 14:22:50 -- host/discovery.sh@84 -- # get_bdev_list 00:18:09.521 14:22:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:09.521 14:22:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:09.521 14:22:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.521 14:22:50 -- common/autotest_common.sh@10 -- # set +x 00:18:09.521 14:22:50 -- host/discovery.sh@55 -- # sort 00:18:09.521 14:22:50 -- host/discovery.sh@55 -- # xargs 00:18:09.521 14:22:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.521 14:22:51 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:09.521 14:22:51 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:09.521 14:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.521 14:22:51 -- common/autotest_common.sh@10 -- # set +x 00:18:09.521 14:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.521 14:22:51 -- host/discovery.sh@87 -- # get_subsystem_names 00:18:09.521 14:22:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:09.521 14:22:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:09.521 14:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.521 14:22:51 -- common/autotest_common.sh@10 -- # set 
+x 00:18:09.521 14:22:51 -- host/discovery.sh@59 -- # sort 00:18:09.521 14:22:51 -- host/discovery.sh@59 -- # xargs 00:18:09.521 14:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.521 14:22:51 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:09.521 14:22:51 -- host/discovery.sh@88 -- # get_bdev_list 00:18:09.779 14:22:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:09.779 14:22:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:09.779 14:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.779 14:22:51 -- common/autotest_common.sh@10 -- # set +x 00:18:09.779 14:22:51 -- host/discovery.sh@55 -- # sort 00:18:09.779 14:22:51 -- host/discovery.sh@55 -- # xargs 00:18:09.779 14:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.779 14:22:51 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:09.779 14:22:51 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:09.779 14:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.779 14:22:51 -- common/autotest_common.sh@10 -- # set +x 00:18:09.779 14:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.779 14:22:51 -- host/discovery.sh@91 -- # get_subsystem_names 00:18:09.779 14:22:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:09.779 14:22:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:09.779 14:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.779 14:22:51 -- common/autotest_common.sh@10 -- # set +x 00:18:09.779 14:22:51 -- host/discovery.sh@59 -- # sort 00:18:09.779 14:22:51 -- host/discovery.sh@59 -- # xargs 00:18:09.779 14:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.779 14:22:51 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:09.779 14:22:51 -- host/discovery.sh@92 -- # get_bdev_list 00:18:09.779 14:22:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:09.779 14:22:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:09.779 14:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.779 14:22:51 -- common/autotest_common.sh@10 -- # set +x 00:18:09.779 14:22:51 -- host/discovery.sh@55 -- # sort 00:18:09.779 14:22:51 -- host/discovery.sh@55 -- # xargs 00:18:09.779 14:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.779 14:22:51 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:18:09.779 14:22:51 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:09.779 14:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.779 14:22:51 -- common/autotest_common.sh@10 -- # set +x 00:18:09.779 [2024-04-26 14:22:51.238511] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.779 14:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.779 14:22:51 -- host/discovery.sh@97 -- # get_subsystem_names 00:18:09.779 14:22:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:09.779 14:22:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:09.779 14:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.779 14:22:51 -- common/autotest_common.sh@10 -- # set +x 00:18:09.779 14:22:51 -- host/discovery.sh@59 -- # sort 00:18:09.779 14:22:51 -- host/discovery.sh@59 -- # xargs 00:18:09.779 14:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.779 14:22:51 -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:18:09.779 14:22:51 -- host/discovery.sh@98 -- # get_bdev_list 00:18:09.779 14:22:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:09.779 14:22:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:09.779 14:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.779 14:22:51 -- common/autotest_common.sh@10 -- # set +x 00:18:09.779 14:22:51 -- host/discovery.sh@55 -- # sort 00:18:09.779 14:22:51 -- host/discovery.sh@55 -- # xargs 00:18:09.779 14:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.779 14:22:51 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:09.779 14:22:51 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:09.779 14:22:51 -- host/discovery.sh@79 -- # expected_count=0 00:18:09.779 14:22:51 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:09.779 14:22:51 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:09.779 14:22:51 -- common/autotest_common.sh@901 -- # local max=10 00:18:09.779 14:22:51 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:09.779 14:22:51 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:09.779 14:22:51 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:09.779 14:22:51 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:09.779 14:22:51 -- host/discovery.sh@74 -- # jq '. | length' 00:18:09.779 14:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.779 14:22:51 -- common/autotest_common.sh@10 -- # set +x 00:18:09.779 14:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:10.037 14:22:51 -- host/discovery.sh@74 -- # notification_count=0 00:18:10.037 14:22:51 -- host/discovery.sh@75 -- # notify_id=0 00:18:10.037 14:22:51 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:10.037 14:22:51 -- common/autotest_common.sh@904 -- # return 0 00:18:10.037 14:22:51 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:10.037 14:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:10.037 14:22:51 -- common/autotest_common.sh@10 -- # set +x 00:18:10.037 14:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:10.037 14:22:51 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:10.037 14:22:51 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:10.037 14:22:51 -- common/autotest_common.sh@901 -- # local max=10 00:18:10.037 14:22:51 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:10.037 14:22:51 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:10.037 14:22:51 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:10.037 14:22:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:10.037 14:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:10.037 14:22:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:10.037 14:22:51 -- common/autotest_common.sh@10 -- # set +x 00:18:10.037 14:22:51 -- host/discovery.sh@59 -- # sort 00:18:10.037 14:22:51 -- host/discovery.sh@59 -- # xargs 00:18:10.037 14:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:18:10.037 14:22:51 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:18:10.037 14:22:51 -- common/autotest_common.sh@906 -- # sleep 1 00:18:10.604 [2024-04-26 14:22:51.995808] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:10.604 [2024-04-26 14:22:51.995850] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:10.604 [2024-04-26 14:22:51.995876] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:10.604 [2024-04-26 14:22:52.083165] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:10.861 [2024-04-26 14:22:52.267173] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:10.861 [2024-04-26 14:22:52.267202] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:11.120 14:22:52 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:11.120 14:22:52 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:11.120 14:22:52 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:11.120 14:22:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:11.120 14:22:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:11.120 14:22:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.120 14:22:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.120 14:22:52 -- host/discovery.sh@59 -- # sort 00:18:11.120 14:22:52 -- host/discovery.sh@59 -- # xargs 00:18:11.120 14:22:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.120 14:22:52 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.120 14:22:52 -- common/autotest_common.sh@904 -- # return 0 00:18:11.120 14:22:52 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:11.120 14:22:52 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:11.120 14:22:52 -- common/autotest_common.sh@901 -- # local max=10 00:18:11.120 14:22:52 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:11.120 14:22:52 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:11.120 14:22:52 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:11.120 14:22:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:11.120 14:22:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:11.120 14:22:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.120 14:22:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.120 14:22:52 -- host/discovery.sh@55 -- # sort 00:18:11.120 14:22:52 -- host/discovery.sh@55 -- # xargs 00:18:11.120 14:22:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.120 14:22:52 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:18:11.120 14:22:52 -- common/autotest_common.sh@904 -- # return 0 00:18:11.120 14:22:52 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:11.120 14:22:52 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:11.120 14:22:52 -- common/autotest_common.sh@901 -- # local max=10 00:18:11.120 14:22:52 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:18:11.120 14:22:52 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:11.120 14:22:52 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:11.120 14:22:52 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:11.120 14:22:52 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:11.120 14:22:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.120 14:22:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.120 14:22:52 -- host/discovery.sh@63 -- # sort -n 00:18:11.120 14:22:52 -- host/discovery.sh@63 -- # xargs 00:18:11.120 14:22:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.120 14:22:52 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:18:11.120 14:22:52 -- common/autotest_common.sh@904 -- # return 0 00:18:11.120 14:22:52 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:11.120 14:22:52 -- host/discovery.sh@79 -- # expected_count=1 00:18:11.120 14:22:52 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:11.120 14:22:52 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:11.120 14:22:52 -- common/autotest_common.sh@901 -- # local max=10 00:18:11.120 14:22:52 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:11.120 14:22:52 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:11.120 14:22:52 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:11.120 14:22:52 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:11.120 14:22:52 -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:11.120 14:22:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.120 14:22:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.120 14:22:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.120 14:22:52 -- host/discovery.sh@74 -- # notification_count=1 00:18:11.120 14:22:52 -- host/discovery.sh@75 -- # notify_id=1 00:18:11.120 14:22:52 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:11.120 14:22:52 -- common/autotest_common.sh@904 -- # return 0 00:18:11.120 14:22:52 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:11.120 14:22:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.120 14:22:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.120 14:22:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.120 14:22:52 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:11.120 14:22:52 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:11.120 14:22:52 -- common/autotest_common.sh@901 -- # local max=10 00:18:11.120 14:22:52 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:11.120 14:22:52 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:11.120 14:22:52 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:11.120 14:22:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:11.120 14:22:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:11.120 14:22:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.120 14:22:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.120 14:22:52 -- host/discovery.sh@55 -- # sort 00:18:11.120 14:22:52 -- host/discovery.sh@55 -- # xargs 00:18:11.120 14:22:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.120 14:22:52 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:11.120 14:22:52 -- common/autotest_common.sh@904 -- # return 0 00:18:11.120 14:22:52 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:11.121 14:22:52 -- host/discovery.sh@79 -- # expected_count=1 00:18:11.121 14:22:52 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:11.121 14:22:52 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:11.121 14:22:52 -- common/autotest_common.sh@901 -- # local max=10 00:18:11.121 14:22:52 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:11.121 14:22:52 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:11.121 14:22:52 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:11.121 14:22:52 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:11.121 14:22:52 -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:11.121 14:22:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.121 14:22:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.121 14:22:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.379 14:22:52 -- host/discovery.sh@74 -- # notification_count=1 00:18:11.379 14:22:52 -- host/discovery.sh@75 -- # notify_id=2 00:18:11.379 14:22:52 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:11.379 14:22:52 -- common/autotest_common.sh@904 -- # return 0 00:18:11.379 14:22:52 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:18:11.379 14:22:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.379 14:22:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.379 [2024-04-26 14:22:52.702917] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:11.379 [2024-04-26 14:22:52.703285] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:11.379 [2024-04-26 14:22:52.703325] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:11.379 14:22:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.379 14:22:52 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:11.379 14:22:52 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:11.379 14:22:52 -- common/autotest_common.sh@901 -- # local max=10 00:18:11.379 14:22:52 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:11.379 14:22:52 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:11.379 14:22:52 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:11.379 14:22:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:11.379 14:22:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.379 14:22:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:11.379 14:22:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.379 14:22:52 -- host/discovery.sh@59 -- # sort 00:18:11.379 14:22:52 -- host/discovery.sh@59 -- # xargs 00:18:11.379 14:22:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.379 14:22:52 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.379 14:22:52 -- common/autotest_common.sh@904 -- # return 0 00:18:11.379 14:22:52 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:11.379 14:22:52 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:11.379 14:22:52 -- common/autotest_common.sh@901 -- # local max=10 00:18:11.379 14:22:52 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:11.379 14:22:52 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:11.379 14:22:52 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:11.379 14:22:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:11.379 14:22:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:11.379 14:22:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.379 14:22:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.379 14:22:52 -- host/discovery.sh@55 -- # sort 00:18:11.379 14:22:52 -- host/discovery.sh@55 -- # xargs 00:18:11.379 14:22:52 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:18:11.379 [2024-04-26 14:22:52.790030] bdev_nvme.c:6843:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:18:11.379 14:22:52 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:11.379 14:22:52 -- common/autotest_common.sh@904 -- # return 0 00:18:11.379 14:22:52 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:11.379 14:22:52 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:11.379 14:22:52 -- common/autotest_common.sh@901 -- # local max=10 00:18:11.379 14:22:52 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:11.379 14:22:52 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:11.379 14:22:52 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:11.379 14:22:52 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:11.379 14:22:52 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:11.379 14:22:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.379 14:22:52 -- common/autotest_common.sh@10 -- # set +x 00:18:11.379 14:22:52 -- host/discovery.sh@63 -- # sort -n 00:18:11.379 14:22:52 -- host/discovery.sh@63 -- # xargs 00:18:11.379 14:22:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.379 14:22:52 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:18:11.380 14:22:52 -- common/autotest_common.sh@906 -- # sleep 1 00:18:11.638 [2024-04-26 14:22:53.089400] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:11.638 [2024-04-26 14:22:53.089438] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:11.638 [2024-04-26 14:22:53.089450] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:12.576 14:22:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:12.576 14:22:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:12.576 14:22:53 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:12.576 14:22:53 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:12.576 14:22:53 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:12.576 14:22:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.576 14:22:53 -- common/autotest_common.sh@10 -- # set +x 00:18:12.576 14:22:53 -- host/discovery.sh@63 -- # sort -n 00:18:12.576 14:22:53 -- host/discovery.sh@63 -- # xargs 00:18:12.576 14:22:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.576 14:22:53 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:12.576 14:22:53 -- common/autotest_common.sh@904 -- # return 0 00:18:12.576 14:22:53 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:12.576 14:22:53 -- host/discovery.sh@79 -- # expected_count=0 00:18:12.576 14:22:53 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:12.576 14:22:53 -- 
common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:12.576 14:22:53 -- common/autotest_common.sh@901 -- # local max=10 00:18:12.576 14:22:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:12.576 14:22:53 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:12.576 14:22:53 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:12.576 14:22:53 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:12.576 14:22:53 -- host/discovery.sh@74 -- # jq '. | length' 00:18:12.576 14:22:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.576 14:22:53 -- common/autotest_common.sh@10 -- # set +x 00:18:12.576 14:22:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.576 14:22:53 -- host/discovery.sh@74 -- # notification_count=0 00:18:12.576 14:22:53 -- host/discovery.sh@75 -- # notify_id=2 00:18:12.576 14:22:53 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:12.576 14:22:53 -- common/autotest_common.sh@904 -- # return 0 00:18:12.576 14:22:53 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:12.576 14:22:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.576 14:22:53 -- common/autotest_common.sh@10 -- # set +x 00:18:12.576 [2024-04-26 14:22:53.939205] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:12.576 [2024-04-26 14:22:53.939250] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:12.576 14:22:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.576 14:22:53 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:12.576 14:22:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:12.576 14:22:53 -- common/autotest_common.sh@901 -- # local max=10 00:18:12.576 14:22:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:12.576 14:22:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:12.576 14:22:53 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:12.576 14:22:53 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:12.576 14:22:53 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:12.576 14:22:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.576 14:22:53 -- common/autotest_common.sh@10 -- # set +x 00:18:12.576 14:22:53 -- host/discovery.sh@59 -- # sort 00:18:12.576 14:22:53 -- host/discovery.sh@59 -- # xargs 00:18:12.576 [2024-04-26 14:22:53.948479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.576 [2024-04-26 14:22:53.948516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.576 [2024-04-26 14:22:53.948534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.576 [2024-04-26 14:22:53.948554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.576 [2024-04-26 14:22:53.948569] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.576 [2024-04-26 14:22:53.948585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.576 [2024-04-26 14:22:53.948600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:12.576 [2024-04-26 14:22:53.948615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:12.576 [2024-04-26 14:22:53.948629] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5b2f0 is same with the state(5) to be set 00:18:12.576 14:22:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.576 [2024-04-26 14:22:53.958487] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5b2f0 (9): Bad file descriptor 00:18:12.576 [2024-04-26 14:22:53.968534] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:12.576 [2024-04-26 14:22:53.968778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.576 [2024-04-26 14:22:53.968941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.576 [2024-04-26 14:22:53.968971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5b2f0 with addr=10.0.0.2, port=4420 00:18:12.576 [2024-04-26 14:22:53.968989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5b2f0 is same with the state(5) to be set 00:18:12.576 [2024-04-26 14:22:53.969017] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5b2f0 (9): Bad file descriptor 00:18:12.576 [2024-04-26 14:22:53.969040] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:12.576 [2024-04-26 14:22:53.969062] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:12.576 [2024-04-26 14:22:53.969079] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:12.576 [2024-04-26 14:22:53.969103] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:12.576 [2024-04-26 14:22:53.978618] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:12.576 [2024-04-26 14:22:53.978797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.576 [2024-04-26 14:22:53.978945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.576 [2024-04-26 14:22:53.978972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5b2f0 with addr=10.0.0.2, port=4420 00:18:12.576 [2024-04-26 14:22:53.978989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5b2f0 is same with the state(5) to be set 00:18:12.576 [2024-04-26 14:22:53.979012] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5b2f0 (9): Bad file descriptor 00:18:12.576 [2024-04-26 14:22:53.979047] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:12.576 [2024-04-26 14:22:53.979064] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:12.576 [2024-04-26 14:22:53.979079] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:12.576 [2024-04-26 14:22:53.979100] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:12.576 14:22:53 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.577 14:22:53 -- common/autotest_common.sh@904 -- # return 0 00:18:12.577 14:22:53 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:12.577 14:22:53 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:12.577 14:22:53 -- common/autotest_common.sh@901 -- # local max=10 00:18:12.577 14:22:53 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:12.577 14:22:53 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:12.577 14:22:53 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:12.577 [2024-04-26 14:22:53.988711] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:12.577 14:22:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:12.577 14:22:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:12.577 14:22:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.577 [2024-04-26 14:22:53.989770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.577 14:22:53 -- common/autotest_common.sh@10 -- # set +x 00:18:12.577 14:22:53 -- host/discovery.sh@55 -- # sort 00:18:12.577 [2024-04-26 14:22:53.989937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.577 [2024-04-26 14:22:53.989968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5b2f0 with addr=10.0.0.2, port=4420 00:18:12.577 [2024-04-26 14:22:53.989985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5b2f0 is same with the state(5) to be set 00:18:12.577 [2024-04-26 14:22:53.990013] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5b2f0 (9): Bad file descriptor 00:18:12.577 [2024-04-26 14:22:53.990048] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 
00:18:12.577 14:22:53 -- host/discovery.sh@55 -- # xargs 00:18:12.577 [2024-04-26 14:22:53.990067] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:12.577 [2024-04-26 14:22:53.990087] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:12.577 [2024-04-26 14:22:53.990108] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:12.577 [2024-04-26 14:22:53.998794] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:12.577 [2024-04-26 14:22:53.999528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.577 [2024-04-26 14:22:53.999699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.577 [2024-04-26 14:22:53.999727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5b2f0 with addr=10.0.0.2, port=4420 00:18:12.577 [2024-04-26 14:22:53.999745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5b2f0 is same with the state(5) to be set 00:18:12.577 [2024-04-26 14:22:53.999769] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5b2f0 (9): Bad file descriptor 00:18:12.577 [2024-04-26 14:22:53.999829] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:12.577 [2024-04-26 14:22:53.999849] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:12.577 [2024-04-26 14:22:53.999864] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:12.577 [2024-04-26 14:22:53.999885] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:12.577 [2024-04-26 14:22:54.008874] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:12.577 [2024-04-26 14:22:54.009030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.577 [2024-04-26 14:22:54.009169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.577 [2024-04-26 14:22:54.009195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5b2f0 with addr=10.0.0.2, port=4420 00:18:12.577 [2024-04-26 14:22:54.009212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5b2f0 is same with the state(5) to be set 00:18:12.577 [2024-04-26 14:22:54.009238] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5b2f0 (9): Bad file descriptor 00:18:12.577 [2024-04-26 14:22:54.009259] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:12.577 [2024-04-26 14:22:54.009273] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:12.577 [2024-04-26 14:22:54.009287] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:12.577 [2024-04-26 14:22:54.009308] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:12.577 [2024-04-26 14:22:54.018950] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:12.577 [2024-04-26 14:22:54.019076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.577 [2024-04-26 14:22:54.019197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.577 [2024-04-26 14:22:54.019224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5b2f0 with addr=10.0.0.2, port=4420 00:18:12.577 [2024-04-26 14:22:54.019241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5b2f0 is same with the state(5) to be set 00:18:12.577 [2024-04-26 14:22:54.019265] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5b2f0 (9): Bad file descriptor 00:18:12.577 [2024-04-26 14:22:54.019286] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:12.577 [2024-04-26 14:22:54.019301] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:12.577 [2024-04-26 14:22:54.019315] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:12.577 [2024-04-26 14:22:54.019336] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:12.577 14:22:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.577 [2024-04-26 14:22:54.029026] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:12.577 [2024-04-26 14:22:54.029198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.577 [2024-04-26 14:22:54.029315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.577 [2024-04-26 14:22:54.029341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5b2f0 with addr=10.0.0.2, port=4420 00:18:12.577 [2024-04-26 14:22:54.029358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5b2f0 is same with the state(5) to be set 00:18:12.577 [2024-04-26 14:22:54.029381] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5b2f0 (9): Bad file descriptor 00:18:12.577 [2024-04-26 14:22:54.029402] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:12.577 [2024-04-26 14:22:54.029417] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:12.577 [2024-04-26 14:22:54.029431] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:12.577 [2024-04-26 14:22:54.029451] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:12.577 14:22:54 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:12.577 14:22:54 -- common/autotest_common.sh@904 -- # return 0 00:18:12.577 14:22:54 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:12.577 14:22:54 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:12.577 14:22:54 -- common/autotest_common.sh@901 -- # local max=10 00:18:12.577 14:22:54 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:12.577 14:22:54 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:12.577 14:22:54 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:12.577 14:22:54 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:12.577 14:22:54 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:12.577 14:22:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.577 14:22:54 -- common/autotest_common.sh@10 -- # set +x 00:18:12.577 14:22:54 -- host/discovery.sh@63 -- # sort -n 00:18:12.577 14:22:54 -- host/discovery.sh@63 -- # xargs 00:18:12.577 [2024-04-26 14:22:54.039115] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:12.577 [2024-04-26 14:22:54.039299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.577 [2024-04-26 14:22:54.039418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.577 [2024-04-26 14:22:54.039444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5b2f0 with addr=10.0.0.2, port=4420 00:18:12.577 [2024-04-26 14:22:54.039461] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5b2f0 is same with the state(5) to be set 00:18:12.577 [2024-04-26 14:22:54.039484] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5b2f0 (9): Bad file descriptor 00:18:12.577 [2024-04-26 14:22:54.039504] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:12.577 [2024-04-26 14:22:54.039519] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:12.577 [2024-04-26 14:22:54.039533] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:12.577 [2024-04-26 14:22:54.039553] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:12.577 14:22:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.577 [2024-04-26 14:22:54.049198] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:12.577 [2024-04-26 14:22:54.049351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.577 [2024-04-26 14:22:54.049464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.577 [2024-04-26 14:22:54.049498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5b2f0 with addr=10.0.0.2, port=4420 00:18:12.577 [2024-04-26 14:22:54.049516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5b2f0 is same with the state(5) to be set 00:18:12.577 [2024-04-26 14:22:54.049539] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5b2f0 (9): Bad file descriptor 00:18:12.577 [2024-04-26 14:22:54.049559] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:12.577 [2024-04-26 14:22:54.049574] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:12.577 [2024-04-26 14:22:54.049588] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:12.577 [2024-04-26 14:22:54.049608] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:12.577 [2024-04-26 14:22:54.059274] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:12.577 [2024-04-26 14:22:54.059449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.577 [2024-04-26 14:22:54.059584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:12.577 [2024-04-26 14:22:54.059609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5b2f0 with addr=10.0.0.2, port=4420 00:18:12.578 [2024-04-26 14:22:54.059626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5b2f0 is same with the state(5) to be set 00:18:12.578 [2024-04-26 14:22:54.059657] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5b2f0 (9): Bad file descriptor 00:18:12.578 [2024-04-26 14:22:54.059678] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:12.578 [2024-04-26 14:22:54.059693] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:12.578 [2024-04-26 14:22:54.059706] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:12.578 [2024-04-26 14:22:54.059726] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:12.578 [2024-04-26 14:22:54.065171] bdev_nvme.c:6706:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:18:12.578 [2024-04-26 14:22:54.065205] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:12.578 14:22:54 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:18:12.578 14:22:54 -- common/autotest_common.sh@906 -- # sleep 1 00:18:13.523 14:22:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:13.523 14:22:55 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:13.523 14:22:55 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:18:13.523 14:22:55 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:13.523 14:22:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:13.523 14:22:55 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:13.523 14:22:55 -- common/autotest_common.sh@10 -- # set +x 00:18:13.523 14:22:55 -- host/discovery.sh@63 -- # sort -n 00:18:13.523 14:22:55 -- host/discovery.sh@63 -- # xargs 00:18:13.781 14:22:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:13.781 14:22:55 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:18:13.781 14:22:55 -- common/autotest_common.sh@904 -- # return 0 00:18:13.781 14:22:55 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:13.781 14:22:55 -- host/discovery.sh@79 -- # expected_count=0 00:18:13.781 14:22:55 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:13.781 14:22:55 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:13.781 14:22:55 -- common/autotest_common.sh@901 -- # local max=10 00:18:13.781 14:22:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:13.781 14:22:55 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:13.781 14:22:55 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:13.781 14:22:55 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:13.782 14:22:55 -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:13.782 14:22:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:13.782 14:22:55 -- common/autotest_common.sh@10 -- # set +x 00:18:13.782 14:22:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:13.782 14:22:55 -- host/discovery.sh@74 -- # notification_count=0 00:18:13.782 14:22:55 -- host/discovery.sh@75 -- # notify_id=2 00:18:13.782 14:22:55 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:13.782 14:22:55 -- common/autotest_common.sh@904 -- # return 0 00:18:13.782 14:22:55 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:13.782 14:22:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:13.782 14:22:55 -- common/autotest_common.sh@10 -- # set +x 00:18:13.782 14:22:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:13.782 14:22:55 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:13.782 14:22:55 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:13.782 14:22:55 -- common/autotest_common.sh@901 -- # local max=10 00:18:13.782 14:22:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:13.782 14:22:55 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:13.782 14:22:55 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:18:13.782 14:22:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:13.782 14:22:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:13.782 14:22:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:13.782 14:22:55 -- common/autotest_common.sh@10 -- # set +x 00:18:13.782 14:22:55 -- host/discovery.sh@59 -- # sort 00:18:13.782 14:22:55 -- host/discovery.sh@59 -- # xargs 00:18:13.782 14:22:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:13.782 14:22:55 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:18:13.782 14:22:55 -- common/autotest_common.sh@904 -- # return 0 00:18:13.782 14:22:55 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:13.782 14:22:55 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:13.782 14:22:55 -- common/autotest_common.sh@901 -- # local max=10 00:18:13.782 14:22:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:13.782 14:22:55 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:13.782 14:22:55 -- common/autotest_common.sh@903 -- # get_bdev_list 00:18:13.782 14:22:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:13.782 14:22:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:13.782 14:22:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:13.782 14:22:55 -- common/autotest_common.sh@10 -- # set +x 00:18:13.782 14:22:55 -- host/discovery.sh@55 -- # sort 00:18:13.782 14:22:55 -- host/discovery.sh@55 -- # xargs 00:18:13.782 14:22:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:13.782 14:22:55 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:18:13.782 14:22:55 -- common/autotest_common.sh@904 -- # return 0 00:18:13.782 14:22:55 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:13.782 14:22:55 -- host/discovery.sh@79 -- # expected_count=2 00:18:13.782 14:22:55 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:13.782 14:22:55 -- common/autotest_common.sh@900 -- # 
local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:13.782 14:22:55 -- common/autotest_common.sh@901 -- # local max=10 00:18:13.782 14:22:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:18:13.782 14:22:55 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:13.782 14:22:55 -- common/autotest_common.sh@903 -- # get_notification_count 00:18:13.782 14:22:55 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:13.782 14:22:55 -- host/discovery.sh@74 -- # jq '. | length' 00:18:13.782 14:22:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:13.782 14:22:55 -- common/autotest_common.sh@10 -- # set +x 00:18:13.782 14:22:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:13.782 14:22:55 -- host/discovery.sh@74 -- # notification_count=2 00:18:13.782 14:22:55 -- host/discovery.sh@75 -- # notify_id=4 00:18:13.782 14:22:55 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:18:13.782 14:22:55 -- common/autotest_common.sh@904 -- # return 0 00:18:13.782 14:22:55 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:13.782 14:22:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:13.782 14:22:55 -- common/autotest_common.sh@10 -- # set +x 00:18:15.157 [2024-04-26 14:22:56.367343] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:15.157 [2024-04-26 14:22:56.367381] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:15.157 [2024-04-26 14:22:56.367405] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:15.157 [2024-04-26 14:22:56.494833] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:18:15.157 [2024-04-26 14:22:56.559878] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:15.157 [2024-04-26 14:22:56.559937] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:15.157 14:22:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:15.157 14:22:56 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:15.157 14:22:56 -- common/autotest_common.sh@638 -- # local es=0 00:18:15.157 14:22:56 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:15.157 14:22:56 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:18:15.157 14:22:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:15.157 14:22:56 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:18:15.157 14:22:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:15.157 14:22:56 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:15.157 14:22:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:15.157 14:22:56 -- 
common/autotest_common.sh@10 -- # set +x 00:18:15.157 request: 00:18:15.157 { 00:18:15.157 "name": "nvme", 00:18:15.157 "trtype": "tcp", 00:18:15.157 "traddr": "10.0.0.2", 00:18:15.157 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:15.157 "adrfam": "ipv4", 00:18:15.157 "trsvcid": "8009", 00:18:15.157 "wait_for_attach": true, 00:18:15.157 "method": "bdev_nvme_start_discovery", 00:18:15.157 "req_id": 1 00:18:15.157 } 00:18:15.157 Got JSON-RPC error response 00:18:15.157 response: 00:18:15.157 { 00:18:15.157 "code": -17, 00:18:15.157 "message": "File exists" 00:18:15.157 } 00:18:15.157 14:22:56 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:18:15.157 14:22:56 -- common/autotest_common.sh@641 -- # es=1 00:18:15.157 14:22:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:15.157 14:22:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:15.157 14:22:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:15.157 14:22:56 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:15.157 14:22:56 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:15.157 14:22:56 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:15.157 14:22:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:15.157 14:22:56 -- common/autotest_common.sh@10 -- # set +x 00:18:15.157 14:22:56 -- host/discovery.sh@67 -- # sort 00:18:15.157 14:22:56 -- host/discovery.sh@67 -- # xargs 00:18:15.157 14:22:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:15.157 14:22:56 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:15.157 14:22:56 -- host/discovery.sh@146 -- # get_bdev_list 00:18:15.157 14:22:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:15.157 14:22:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:15.157 14:22:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:15.157 14:22:56 -- common/autotest_common.sh@10 -- # set +x 00:18:15.157 14:22:56 -- host/discovery.sh@55 -- # sort 00:18:15.157 14:22:56 -- host/discovery.sh@55 -- # xargs 00:18:15.157 14:22:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:15.157 14:22:56 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:15.157 14:22:56 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:15.157 14:22:56 -- common/autotest_common.sh@638 -- # local es=0 00:18:15.157 14:22:56 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:15.157 14:22:56 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:18:15.157 14:22:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:15.157 14:22:56 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:18:15.157 14:22:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:15.157 14:22:56 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:15.157 14:22:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:15.157 14:22:56 -- common/autotest_common.sh@10 -- # set +x 00:18:15.157 request: 00:18:15.157 { 00:18:15.157 "name": "nvme_second", 00:18:15.157 "trtype": "tcp", 00:18:15.157 "traddr": "10.0.0.2", 00:18:15.157 "hostnqn": 
"nqn.2021-12.io.spdk:test", 00:18:15.157 "adrfam": "ipv4", 00:18:15.157 "trsvcid": "8009", 00:18:15.157 "wait_for_attach": true, 00:18:15.157 "method": "bdev_nvme_start_discovery", 00:18:15.157 "req_id": 1 00:18:15.157 } 00:18:15.157 Got JSON-RPC error response 00:18:15.157 response: 00:18:15.157 { 00:18:15.157 "code": -17, 00:18:15.157 "message": "File exists" 00:18:15.157 } 00:18:15.157 14:22:56 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:18:15.157 14:22:56 -- common/autotest_common.sh@641 -- # es=1 00:18:15.157 14:22:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:15.157 14:22:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:15.157 14:22:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:15.157 14:22:56 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:15.157 14:22:56 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:15.157 14:22:56 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:15.157 14:22:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:15.157 14:22:56 -- common/autotest_common.sh@10 -- # set +x 00:18:15.157 14:22:56 -- host/discovery.sh@67 -- # sort 00:18:15.157 14:22:56 -- host/discovery.sh@67 -- # xargs 00:18:15.157 14:22:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:15.415 14:22:56 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:15.415 14:22:56 -- host/discovery.sh@152 -- # get_bdev_list 00:18:15.415 14:22:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:15.415 14:22:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:15.415 14:22:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:15.415 14:22:56 -- common/autotest_common.sh@10 -- # set +x 00:18:15.415 14:22:56 -- host/discovery.sh@55 -- # sort 00:18:15.415 14:22:56 -- host/discovery.sh@55 -- # xargs 00:18:15.415 14:22:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:15.415 14:22:56 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:15.415 14:22:56 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:15.415 14:22:56 -- common/autotest_common.sh@638 -- # local es=0 00:18:15.415 14:22:56 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:15.415 14:22:56 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:18:15.415 14:22:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:15.415 14:22:56 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:18:15.415 14:22:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:15.415 14:22:56 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:15.415 14:22:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:15.415 14:22:56 -- common/autotest_common.sh@10 -- # set +x 00:18:16.350 [2024-04-26 14:22:57.779340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:16.350 [2024-04-26 14:22:57.779511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:16.350 [2024-04-26 14:22:57.779540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x1f77e00 with addr=10.0.0.2, port=8010 00:18:16.350 [2024-04-26 14:22:57.779575] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:16.350 [2024-04-26 14:22:57.779592] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:16.350 [2024-04-26 14:22:57.779606] bdev_nvme.c:6981:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:17.283 [2024-04-26 14:22:58.781790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:17.283 [2024-04-26 14:22:58.781987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:17.283 [2024-04-26 14:22:58.782014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f77e00 with addr=10.0.0.2, port=8010 00:18:17.283 [2024-04-26 14:22:58.782040] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:17.283 [2024-04-26 14:22:58.782056] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:17.283 [2024-04-26 14:22:58.782070] bdev_nvme.c:6981:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:18.218 [2024-04-26 14:22:59.784009] bdev_nvme.c:6962:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:18:18.218 request: 00:18:18.218 { 00:18:18.218 "name": "nvme_second", 00:18:18.218 "trtype": "tcp", 00:18:18.218 "traddr": "10.0.0.2", 00:18:18.218 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:18.218 "adrfam": "ipv4", 00:18:18.218 "trsvcid": "8010", 00:18:18.218 "attach_timeout_ms": 3000, 00:18:18.477 "method": "bdev_nvme_start_discovery", 00:18:18.477 "req_id": 1 00:18:18.477 } 00:18:18.477 Got JSON-RPC error response 00:18:18.477 response: 00:18:18.477 { 00:18:18.477 "code": -110, 00:18:18.477 "message": "Connection timed out" 00:18:18.477 } 00:18:18.477 14:22:59 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:18:18.477 14:22:59 -- common/autotest_common.sh@641 -- # es=1 00:18:18.477 14:22:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:18.477 14:22:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:18.477 14:22:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:18.477 14:22:59 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:18.477 14:22:59 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:18.477 14:22:59 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:18.477 14:22:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:18.477 14:22:59 -- common/autotest_common.sh@10 -- # set +x 00:18:18.477 14:22:59 -- host/discovery.sh@67 -- # sort 00:18:18.477 14:22:59 -- host/discovery.sh@67 -- # xargs 00:18:18.477 14:22:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:18.477 14:22:59 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:18.477 14:22:59 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:18.477 14:22:59 -- host/discovery.sh@161 -- # kill 3189621 00:18:18.477 14:22:59 -- host/discovery.sh@162 -- # nvmftestfini 00:18:18.477 14:22:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:18.477 14:22:59 -- nvmf/common.sh@117 -- # sync 00:18:18.477 14:22:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:18.477 14:22:59 -- nvmf/common.sh@120 -- # set +e 00:18:18.477 14:22:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:18.477 14:22:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:18.477 rmmod nvme_tcp 00:18:18.477 rmmod nvme_fabrics 
00:18:18.477 rmmod nvme_keyring
00:18:18.477 14:22:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:18:18.477 14:22:59 -- nvmf/common.sh@124 -- # set -e
00:18:18.477 14:22:59 -- nvmf/common.sh@125 -- # return 0
00:18:18.477 14:22:59 -- nvmf/common.sh@478 -- # '[' -n 3189597 ']'
00:18:18.477 14:22:59 -- nvmf/common.sh@479 -- # killprocess 3189597
00:18:18.477 14:22:59 -- common/autotest_common.sh@936 -- # '[' -z 3189597 ']'
00:18:18.477 14:22:59 -- common/autotest_common.sh@940 -- # kill -0 3189597
00:18:18.477 14:22:59 -- common/autotest_common.sh@941 -- # uname
00:18:18.477 14:22:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:18.477 14:22:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3189597
00:18:18.477 14:22:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:18:18.477 14:22:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:18:18.477 14:22:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3189597'
00:18:18.477 killing process with pid 3189597
00:18:18.477 14:22:59 -- common/autotest_common.sh@955 -- # kill 3189597
00:18:18.477 14:22:59 -- common/autotest_common.sh@960 -- # wait 3189597
00:18:18.738 14:23:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:18:18.738 14:23:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:18:18.738 14:23:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:18:18.738 14:23:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:18:18.738 14:23:00 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:18:18.738 14:23:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:18.738 14:23:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:18.738 14:23:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:20.730 14:23:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:18:20.730
00:18:20.730 real 0m13.734s
00:18:20.730 user 0m21.120s
00:18:20.730 sys 0m2.448s
00:18:20.730 14:23:02 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:18:20.730 14:23:02 -- common/autotest_common.sh@10 -- # set +x
00:18:20.730 ************************************
00:18:20.730 END TEST nvmf_discovery
00:18:20.730 ************************************
00:18:20.730 14:23:02 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:18:20.730 14:23:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:18:20.730 14:23:02 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:18:20.730 14:23:02 -- common/autotest_common.sh@10 -- # set +x
00:18:20.989 ************************************
00:18:20.989 START TEST nvmf_discovery_remove_ifc
00:18:20.989 ************************************
00:18:20.989 14:23:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:18:20.989 * Looking for test storage...
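
Before the next test's trace resumes, it is worth making the contract of the two nvmf_discovery failures above explicit: bdev_nvme_start_discovery rejects a controller name that is already registered with -17 ("File exists"), and an endpoint that never answers exhausts the -T attach timeout and fails with -110 ("Connection timed out"). A minimal sketch against the same host daemon, assuming rpc_cmd is the usual wrapper around scripts/rpc.py in the SPDK tree and that the first controller was named "nvme", as the trace above suggests:

    # the first discovery controller, named "nvme", attaches cleanly
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --wait-for-attach
    # reusing the name "nvme" while it exists returns -17 "File exists"
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
    # nothing listens on 8010, so the 3000 ms attach timeout expires: -110
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -T 3000
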
00:18:20.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:20.989 14:23:02 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:20.989 14:23:02 -- nvmf/common.sh@7 -- # uname -s 00:18:20.989 14:23:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:20.989 14:23:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:20.989 14:23:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:20.989 14:23:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:20.989 14:23:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:20.989 14:23:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:20.989 14:23:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:20.989 14:23:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:20.989 14:23:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:20.989 14:23:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:20.989 14:23:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:20.989 14:23:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:20.989 14:23:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:20.989 14:23:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:20.989 14:23:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:20.989 14:23:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:20.989 14:23:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:20.989 14:23:02 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:20.989 14:23:02 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:20.989 14:23:02 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:20.989 14:23:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.989 14:23:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.989 14:23:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.989 14:23:02 -- paths/export.sh@5 -- # export PATH 00:18:20.989 14:23:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.989 14:23:02 -- nvmf/common.sh@47 -- # : 0 00:18:20.989 14:23:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:20.989 14:23:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:20.989 14:23:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:20.989 14:23:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:20.989 14:23:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:20.989 14:23:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:20.989 14:23:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:20.989 14:23:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:20.989 14:23:02 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:18:20.989 14:23:02 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:18:20.989 14:23:02 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:18:20.989 14:23:02 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:20.989 14:23:02 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:18:20.989 14:23:02 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:18:20.989 14:23:02 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:18:20.989 14:23:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:20.989 14:23:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:20.989 14:23:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:20.989 14:23:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:20.989 14:23:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:20.989 14:23:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.989 14:23:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:20.989 14:23:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.989 14:23:02 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:20.989 14:23:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:20.989 14:23:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:20.989 14:23:02 -- common/autotest_common.sh@10 -- # set +x 00:18:22.893 14:23:03 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:22.893 14:23:03 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:22.893 14:23:03 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:22.893 14:23:03 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:22.893 14:23:03 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:22.893 14:23:03 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:22.893 14:23:03 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:22.893 14:23:03 -- nvmf/common.sh@295 -- # net_devs=() 00:18:22.893 14:23:03 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:22.893 14:23:03 -- nvmf/common.sh@296 -- # e810=() 00:18:22.893 14:23:03 -- nvmf/common.sh@296 -- # local -ga e810 00:18:22.893 14:23:03 -- nvmf/common.sh@297 -- # x722=() 00:18:22.893 14:23:03 -- nvmf/common.sh@297 -- # local -ga x722 00:18:22.893 14:23:03 -- nvmf/common.sh@298 -- # mlx=() 00:18:22.893 14:23:03 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:22.893 14:23:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:22.893 14:23:03 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:22.893 14:23:03 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:22.893 14:23:03 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:22.893 14:23:03 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:22.893 14:23:03 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:22.893 14:23:03 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:22.893 14:23:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:22.893 14:23:03 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:22.893 14:23:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:22.893 14:23:03 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:22.893 14:23:03 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:22.893 14:23:03 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:22.893 14:23:03 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:22.893 14:23:03 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:22.893 14:23:03 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:22.893 14:23:03 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:22.893 14:23:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:22.893 14:23:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:22.893 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:22.893 14:23:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:22.893 14:23:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:22.893 14:23:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:22.893 14:23:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:22.893 14:23:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:22.893 14:23:03 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:22.893 14:23:03 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:22.893 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:22.893 14:23:03 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:22.893 14:23:03 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:22.893 14:23:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:22.893 14:23:03 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:22.893 14:23:03 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:22.893 14:23:03 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:22.893 14:23:03 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:22.893 14:23:03 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:22.893 14:23:03 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:22.893 14:23:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:22.893 14:23:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:22.893 14:23:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:22.893 14:23:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:22.893 Found net devices under 0000:08:00.0: cvl_0_0 00:18:22.893 14:23:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:22.893 14:23:03 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:22.893 14:23:03 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:22.893 14:23:03 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:22.893 14:23:03 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:22.893 14:23:03 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:22.893 Found net devices under 0000:08:00.1: cvl_0_1 00:18:22.893 14:23:03 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:22.893 14:23:03 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:22.893 14:23:03 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:22.893 14:23:03 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:22.893 14:23:03 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:22.893 14:23:03 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:22.893 14:23:03 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:22.893 14:23:03 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:22.893 14:23:03 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:22.893 14:23:03 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:22.893 14:23:03 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:22.893 14:23:03 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:22.893 14:23:03 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:22.893 14:23:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:22.893 14:23:03 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:22.893 14:23:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:22.893 14:23:03 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:22.893 14:23:03 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:22.893 14:23:03 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:22.893 14:23:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:22.893 14:23:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:22.893 14:23:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:22.893 14:23:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:22.893 14:23:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:22.893 14:23:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:22.893 14:23:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:22.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:22.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:18:22.893 00:18:22.893 --- 10.0.0.2 ping statistics --- 00:18:22.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.893 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:18:22.893 14:23:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:22.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:22.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:18:22.893 00:18:22.893 --- 10.0.0.1 ping statistics --- 00:18:22.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.893 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:18:22.893 14:23:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:22.893 14:23:04 -- nvmf/common.sh@411 -- # return 0 00:18:22.893 14:23:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:22.893 14:23:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:22.893 14:23:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:22.893 14:23:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:22.893 14:23:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:22.893 14:23:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:22.893 14:23:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:22.893 14:23:04 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:18:22.893 14:23:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:22.893 14:23:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:22.893 14:23:04 -- common/autotest_common.sh@10 -- # set +x 00:18:22.893 14:23:04 -- nvmf/common.sh@470 -- # nvmfpid=3192196 00:18:22.893 14:23:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:22.893 14:23:04 -- nvmf/common.sh@471 -- # waitforlisten 3192196 00:18:22.893 14:23:04 -- common/autotest_common.sh@817 -- # '[' -z 3192196 ']' 00:18:22.893 14:23:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.893 14:23:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:22.893 14:23:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.893 14:23:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:22.893 14:23:04 -- common/autotest_common.sh@10 -- # set +x 00:18:22.893 [2024-04-26 14:23:04.168289] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:18:22.893 [2024-04-26 14:23:04.168393] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.893 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.893 [2024-04-26 14:23:04.233713] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.893 [2024-04-26 14:23:04.350747] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.893 [2024-04-26 14:23:04.350811] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:22.893 [2024-04-26 14:23:04.350828] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:22.894 [2024-04-26 14:23:04.350841] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:22.894 [2024-04-26 14:23:04.350852] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:22.894 [2024-04-26 14:23:04.350884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:18:22.894 14:23:04 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:18:22.894 14:23:04 -- common/autotest_common.sh@850 -- # return 0
00:18:22.894 14:23:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:18:22.894 14:23:04 -- common/autotest_common.sh@716 -- # xtrace_disable
00:18:22.894 14:23:04 -- common/autotest_common.sh@10 -- # set +x
00:18:23.152 14:23:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:23.152 14:23:04 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd
00:18:23.152 14:23:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:23.152 14:23:04 -- common/autotest_common.sh@10 -- # set +x
00:18:23.152 [2024-04-26 14:23:04.498841] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:23.152 [2024-04-26 14:23:04.506987] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:18:23.152 null0
00:18:23.152 [2024-04-26 14:23:04.538949] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:23.152 14:23:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:23.152 14:23:04 -- host/discovery_remove_ifc.sh@59 -- # hostpid=3192218
00:18:23.152 14:23:04 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme
00:18:23.152 14:23:04 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3192218 /tmp/host.sock
00:18:23.152 14:23:04 -- common/autotest_common.sh@817 -- # '[' -z 3192218 ']'
00:18:23.152 14:23:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock
00:18:23.152 14:23:04 -- common/autotest_common.sh@822 -- # local max_retries=100
00:18:23.152 14:23:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:18:23.152 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:18:23.152 14:23:04 -- common/autotest_common.sh@826 -- # xtrace_disable
00:18:23.152 14:23:04 -- common/autotest_common.sh@10 -- # set +x
00:18:23.152 [2024-04-26 14:23:04.607351] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
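
The two-process shape of this test is easy to lose in the trace: a target nvmf_tgt (pid 3192196) runs inside the cvl_0_0_ns_spdk namespace on core 1 and listens on 10.0.0.2, while a second, host-side nvmf_tgt (pid 3192218) runs on core 0, answers RPCs on its own socket /tmp/host.sock, and sits parked by --wait-for-rpc until framework_start_init releases it. A condensed sketch of that pairing, with the Jenkins workspace path shortened to a relative one:

    # target side: owns cvl_0_0 (10.0.0.2) in the namespace; -e 0xFFFF enables
    # all tracepoint groups, -m 0x2 pins the reactor to core 1
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    # host side: separate RPC socket so the two instances never collide
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    # set the host bdev_nvme options (-e 1, verbatim from the script), then
    # let the parked application finish starting up
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    scripts/rpc.py -s /tmp/host.sock framework_start_init
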
00:18:23.152 [2024-04-26 14:23:04.607444] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3192218 ] 00:18:23.152 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.152 [2024-04-26 14:23:04.667906] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.411 [2024-04-26 14:23:04.785788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.411 14:23:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:23.411 14:23:04 -- common/autotest_common.sh@850 -- # return 0 00:18:23.411 14:23:04 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:23.411 14:23:04 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:18:23.411 14:23:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:23.411 14:23:04 -- common/autotest_common.sh@10 -- # set +x 00:18:23.411 14:23:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:23.411 14:23:04 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:18:23.411 14:23:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:23.411 14:23:04 -- common/autotest_common.sh@10 -- # set +x 00:18:23.411 14:23:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:23.411 14:23:04 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:18:23.411 14:23:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:23.411 14:23:04 -- common/autotest_common.sh@10 -- # set +x 00:18:24.786 [2024-04-26 14:23:06.006396] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:24.786 [2024-04-26 14:23:06.006432] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:24.786 [2024-04-26 14:23:06.006457] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:24.786 [2024-04-26 14:23:06.134912] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:24.786 [2024-04-26 14:23:06.238507] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:24.786 [2024-04-26 14:23:06.238569] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:24.786 [2024-04-26 14:23:06.238616] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:24.786 [2024-04-26 14:23:06.238652] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:24.786 [2024-04-26 14:23:06.238693] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:24.786 14:23:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.786 14:23:06 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:18:24.786 14:23:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:24.786 14:23:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:24.786 14:23:06 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:24.786 14:23:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.786 14:23:06 -- common/autotest_common.sh@10 -- # set +x 00:18:24.786 14:23:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:24.786 14:23:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:24.786 [2024-04-26 14:23:06.244888] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x19e7a00 was disconnected and freed. delete nvme_qpair. 00:18:24.786 14:23:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:24.786 14:23:06 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:18:24.786 14:23:06 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:18:24.786 14:23:06 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:18:24.786 14:23:06 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:18:24.786 14:23:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:24.786 14:23:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:24.786 14:23:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:24.786 14:23:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:24.786 14:23:06 -- common/autotest_common.sh@10 -- # set +x 00:18:24.786 14:23:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:24.786 14:23:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:24.786 14:23:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.044 14:23:06 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:25.044 14:23:06 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:25.979 14:23:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:25.979 14:23:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:25.979 14:23:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:25.979 14:23:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.979 14:23:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:25.979 14:23:07 -- common/autotest_common.sh@10 -- # set +x 00:18:25.979 14:23:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:25.979 14:23:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.979 14:23:07 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:25.979 14:23:07 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:26.912 14:23:08 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:26.912 14:23:08 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:26.912 14:23:08 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:26.912 14:23:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:26.912 14:23:08 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:26.912 14:23:08 -- common/autotest_common.sh@10 -- # set +x 00:18:26.912 14:23:08 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:26.912 14:23:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:26.912 14:23:08 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:26.912 14:23:08 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:28.286 14:23:09 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:28.286 14:23:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:28.286 14:23:09 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:28.286 14:23:09 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:18:28.286 14:23:09 -- common/autotest_common.sh@10 -- # set +x 00:18:28.286 14:23:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:28.286 14:23:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:28.286 14:23:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:28.286 14:23:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:28.286 14:23:09 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:29.220 14:23:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:29.220 14:23:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:29.220 14:23:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:29.220 14:23:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:29.220 14:23:10 -- common/autotest_common.sh@10 -- # set +x 00:18:29.220 14:23:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:29.220 14:23:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:29.220 14:23:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:29.220 14:23:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:29.220 14:23:10 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:30.153 14:23:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:30.153 14:23:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:30.153 14:23:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:30.153 14:23:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:30.153 14:23:11 -- common/autotest_common.sh@10 -- # set +x 00:18:30.153 14:23:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:30.153 14:23:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:30.153 14:23:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:30.153 14:23:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:30.153 14:23:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:30.153 [2024-04-26 14:23:11.679497] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:18:30.153 [2024-04-26 14:23:11.679562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.153 [2024-04-26 14:23:11.679585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.153 [2024-04-26 14:23:11.679604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.153 [2024-04-26 14:23:11.679619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.153 [2024-04-26 14:23:11.679643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.153 [2024-04-26 14:23:11.679660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:30.153 [2024-04-26 14:23:11.679676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:30.153 [2024-04-26 14:23:11.679691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:30.153 [2024-04-26 14:23:11.679707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:18:30.154 [2024-04-26 14:23:11.679722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:30.154 [2024-04-26 14:23:11.679737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19adf70 is same with the state(5) to be set
00:18:30.154 [2024-04-26 14:23:11.689517] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19adf70 (9): Bad file descriptor
00:18:30.154 [2024-04-26 14:23:11.699564] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:18:31.086 14:23:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:18:31.086 14:23:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:18:31.086 14:23:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:18:31.086 14:23:12 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:31.086 14:23:12 -- common/autotest_common.sh@10 -- # set +x
00:18:31.086 14:23:12 -- host/discovery_remove_ifc.sh@29 -- # sort
00:18:31.086 14:23:12 -- host/discovery_remove_ifc.sh@29 -- # xargs
00:18:31.343 [2024-04-26 14:23:12.734674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110
00:18:32.273 [2024-04-26 14:23:13.758673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110
00:18:32.273 [2024-04-26 14:23:13.758733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19adf70 with addr=10.0.0.2, port=4420
00:18:32.273 [2024-04-26 14:23:13.758759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19adf70 is same with the state(5) to be set
00:18:32.273 [2024-04-26 14:23:13.759265] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19adf70 (9): Bad file descriptor
00:18:32.273 [2024-04-26 14:23:13.759308] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
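
That "Resetting controller failed" line is the attach-time timeout policy doing its job rather than an unexpected fault: the discovery controller was started with a 1 s reconnect delay, a 1 s fast-I/O-fail window, and a 2 s controller-loss budget, so once cvl_0_0 went away the host retried briefly and then gave the controller up. The relevant invocation, repeated from the setup step earlier in this trace (rpc_cmd shown expanded to scripts/rpc.py):

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
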
00:18:32.273 [2024-04-26 14:23:13.759350] bdev_nvme.c:6670:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:18:32.273 [2024-04-26 14:23:13.759388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:32.273 [2024-04-26 14:23:13.759409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.273 [2024-04-26 14:23:13.759428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:32.273 [2024-04-26 14:23:13.759444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.273 [2024-04-26 14:23:13.759459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:32.273 [2024-04-26 14:23:13.759474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.273 [2024-04-26 14:23:13.759490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:32.273 [2024-04-26 14:23:13.759505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.273 [2024-04-26 14:23:13.759520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:32.273 [2024-04-26 14:23:13.759535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.273 [2024-04-26 14:23:13.759549] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:18:32.273 [2024-04-26 14:23:13.759796] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ae380 (9): Bad file descriptor 00:18:32.273 [2024-04-26 14:23:13.760815] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:18:32.274 [2024-04-26 14:23:13.760838] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:18:32.274 14:23:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:32.274 14:23:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:32.274 14:23:13 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:33.646 14:23:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:33.646 14:23:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:33.646 14:23:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:33.646 14:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:33.646 14:23:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:33.646 14:23:14 -- common/autotest_common.sh@10 -- # set +x 00:18:33.646 14:23:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:33.646 14:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:33.646 14:23:14 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:18:33.646 14:23:14 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:33.646 14:23:14 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:33.646 14:23:14 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:18:33.646 14:23:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:33.646 14:23:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:33.646 14:23:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:33.646 14:23:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:33.646 14:23:14 -- common/autotest_common.sh@10 -- # set +x 00:18:33.646 14:23:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:33.646 14:23:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:33.646 14:23:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:33.646 14:23:14 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:33.646 14:23:14 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:34.211 [2024-04-26 14:23:15.772243] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:34.211 [2024-04-26 14:23:15.772277] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:34.211 [2024-04-26 14:23:15.772302] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:34.470 14:23:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:34.470 [2024-04-26 14:23:15.900715] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:18:34.470 14:23:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:34.470 14:23:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:34.470 14:23:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:34.470 14:23:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:34.470 14:23:15 -- common/autotest_common.sh@10 -- # set +x 
00:18:34.470 14:23:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:34.470 14:23:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:34.470 14:23:15 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:34.470 14:23:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:34.728 [2024-04-26 14:23:16.083790] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:34.728 [2024-04-26 14:23:16.083843] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:34.728 [2024-04-26 14:23:16.083880] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:34.728 [2024-04-26 14:23:16.083906] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:18:34.728 [2024-04-26 14:23:16.083921] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:34.728 [2024-04-26 14:23:16.091112] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x19be730 was disconnected and freed. delete nvme_qpair. 00:18:35.722 14:23:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:35.722 14:23:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:35.722 14:23:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:35.722 14:23:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:35.722 14:23:16 -- common/autotest_common.sh@10 -- # set +x 00:18:35.722 14:23:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:35.722 14:23:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:35.722 14:23:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:35.722 14:23:16 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:18:35.722 14:23:16 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:18:35.722 14:23:16 -- host/discovery_remove_ifc.sh@90 -- # killprocess 3192218 00:18:35.722 14:23:16 -- common/autotest_common.sh@936 -- # '[' -z 3192218 ']' 00:18:35.722 14:23:16 -- common/autotest_common.sh@940 -- # kill -0 3192218 00:18:35.722 14:23:16 -- common/autotest_common.sh@941 -- # uname 00:18:35.722 14:23:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:35.722 14:23:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3192218 00:18:35.722 14:23:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:35.722 14:23:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:35.722 14:23:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3192218' 00:18:35.722 killing process with pid 3192218 00:18:35.722 14:23:17 -- common/autotest_common.sh@955 -- # kill 3192218 00:18:35.722 14:23:17 -- common/autotest_common.sh@960 -- # wait 3192218 00:18:35.722 14:23:17 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:18:35.722 14:23:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:35.722 14:23:17 -- nvmf/common.sh@117 -- # sync 00:18:35.722 14:23:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:35.722 14:23:17 -- nvmf/common.sh@120 -- # set +e 00:18:35.722 14:23:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:35.722 14:23:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:35.722 rmmod nvme_tcp 00:18:35.722 rmmod nvme_fabrics 00:18:35.722 rmmod nvme_keyring 00:18:35.981 14:23:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:35.981 14:23:17 -- nvmf/common.sh@124 -- # set -e 00:18:35.981 14:23:17 
-- nvmf/common.sh@125 -- # return 0
00:18:35.981 14:23:17 -- nvmf/common.sh@478 -- # '[' -n 3192196 ']'
00:18:35.981 14:23:17 -- nvmf/common.sh@479 -- # killprocess 3192196
00:18:35.981 14:23:17 -- common/autotest_common.sh@936 -- # '[' -z 3192196 ']'
00:18:35.981 14:23:17 -- common/autotest_common.sh@940 -- # kill -0 3192196
00:18:35.981 14:23:17 -- common/autotest_common.sh@941 -- # uname
00:18:35.981 14:23:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:35.981 14:23:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3192196
00:18:35.981 14:23:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:18:35.981 14:23:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:18:35.981 14:23:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3192196'
00:18:35.981 killing process with pid 3192196
00:18:35.981 14:23:17 -- common/autotest_common.sh@955 -- # kill 3192196
00:18:35.981 14:23:17 -- common/autotest_common.sh@960 -- # wait 3192196
00:18:35.981 14:23:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:18:35.981 14:23:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:18:35.981 14:23:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:18:35.981 14:23:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:18:35.981 14:23:17 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:18:35.981 14:23:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:35.981 14:23:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:35.981 14:23:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:38.527 14:23:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:18:38.527
00:18:38.527 real 0m17.268s
00:18:38.527 user 0m24.562s
00:18:38.527 sys 0m2.600s
00:18:38.527 14:23:19 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:18:38.527 14:23:19 -- common/autotest_common.sh@10 -- # set +x
00:18:38.527 ************************************
00:18:38.527 END TEST nvmf_discovery_remove_ifc
00:18:38.527 ************************************
00:18:38.527 14:23:19 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:18:38.527 14:23:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:18:38.527 14:23:19 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:18:38.527 14:23:19 -- common/autotest_common.sh@10 -- # set +x
00:18:38.527 ************************************
00:18:38.527 START TEST nvmf_identify_kernel_target
00:18:38.527 ************************************
00:18:38.527 14:23:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:18:38.527 * Looking for test storage...
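
One more aside before the kernel-target output: the heart of the test that just passed is an interface flap expressed as a handful of iproute2 calls plus a poll of the host's bdev list. A condensed sketch of that loop, assuming the same namespace and RPC socket as above (the script's wait_for_bdev helper is a timed version of the same poll):

    # pull 10.0.0.2 out from under the attached discovery controller
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # wait for the namespace bdev (nvme0n1) to drop out of the bdev list
    while [ -n "$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name')" ]; do
        sleep 1
    done
    # restore the address; rediscovery re-attaches the subsystem as nvme1n1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
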
00:18:38.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:38.527 14:23:19 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:38.527 14:23:19 -- nvmf/common.sh@7 -- # uname -s 00:18:38.527 14:23:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.527 14:23:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.527 14:23:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.527 14:23:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.527 14:23:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.527 14:23:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.527 14:23:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.527 14:23:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.527 14:23:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.527 14:23:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.527 14:23:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:38.527 14:23:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:38.527 14:23:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.527 14:23:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.527 14:23:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:38.527 14:23:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:38.527 14:23:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:38.527 14:23:19 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.527 14:23:19 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.527 14:23:19 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.527 14:23:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.528 14:23:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.528 14:23:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.528 14:23:19 -- paths/export.sh@5 -- # export PATH 00:18:38.528 14:23:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.528 14:23:19 -- nvmf/common.sh@47 -- # : 0 00:18:38.528 14:23:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:38.528 14:23:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:38.528 14:23:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:38.528 14:23:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:38.528 14:23:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:38.528 14:23:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:38.528 14:23:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:38.528 14:23:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:38.528 14:23:19 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:18:38.528 14:23:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:38.528 14:23:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:38.528 14:23:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:38.528 14:23:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:38.528 14:23:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:38.528 14:23:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.528 14:23:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:38.528 14:23:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.528 14:23:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:38.528 14:23:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:38.528 14:23:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:38.528 14:23:19 -- common/autotest_common.sh@10 -- # set +x 00:18:39.906 14:23:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:39.906 14:23:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:39.906 14:23:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:39.906 14:23:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:39.906 14:23:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:39.906 14:23:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:39.906 14:23:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:39.906 14:23:21 -- nvmf/common.sh@295 -- # net_devs=() 00:18:39.906 14:23:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:39.906 14:23:21 -- nvmf/common.sh@296 -- # e810=() 00:18:39.906 14:23:21 -- nvmf/common.sh@296 -- # local -ga e810 00:18:39.906 14:23:21 -- nvmf/common.sh@297 -- # 
x722=() 00:18:39.906 14:23:21 -- nvmf/common.sh@297 -- # local -ga x722 00:18:39.906 14:23:21 -- nvmf/common.sh@298 -- # mlx=() 00:18:39.906 14:23:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:39.906 14:23:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:39.906 14:23:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:39.906 14:23:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:39.906 14:23:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:39.906 14:23:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:39.906 14:23:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:39.906 14:23:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:39.906 14:23:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:39.906 14:23:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:39.906 14:23:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:39.906 14:23:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:39.906 14:23:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:39.906 14:23:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:39.906 14:23:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:39.906 14:23:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:39.906 14:23:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:39.906 14:23:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:39.906 14:23:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:39.906 14:23:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:39.906 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:39.906 14:23:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:39.906 14:23:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:39.906 14:23:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.906 14:23:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.906 14:23:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:39.906 14:23:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:39.906 14:23:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:39.906 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:39.906 14:23:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:39.906 14:23:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:39.906 14:23:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.906 14:23:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.906 14:23:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:39.906 14:23:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:39.906 14:23:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:39.906 14:23:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:39.906 14:23:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:39.906 14:23:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.906 14:23:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:39.906 14:23:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.906 14:23:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:39.906 Found net devices under 0000:08:00.0: cvl_0_0 00:18:39.906 14:23:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:18:39.906 14:23:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:39.906 14:23:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.906 14:23:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:39.906 14:23:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.906 14:23:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:39.906 Found net devices under 0000:08:00.1: cvl_0_1 00:18:39.906 14:23:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.906 14:23:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:39.906 14:23:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:39.906 14:23:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:39.906 14:23:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:39.906 14:23:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:39.906 14:23:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:39.906 14:23:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:39.906 14:23:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:39.906 14:23:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:39.906 14:23:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:39.906 14:23:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:39.906 14:23:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:39.906 14:23:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:39.906 14:23:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:39.906 14:23:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:39.906 14:23:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:39.906 14:23:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:39.906 14:23:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:40.165 14:23:21 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:40.165 14:23:21 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:40.165 14:23:21 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:40.165 14:23:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:40.165 14:23:21 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:40.165 14:23:21 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:40.165 14:23:21 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:40.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:40.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:18:40.165 00:18:40.165 --- 10.0.0.2 ping statistics --- 00:18:40.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.165 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:18:40.165 14:23:21 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:40.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:40.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:18:40.165 00:18:40.165 --- 10.0.0.1 ping statistics --- 00:18:40.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.165 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:18:40.165 14:23:21 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:40.165 14:23:21 -- nvmf/common.sh@411 -- # return 0 00:18:40.165 14:23:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:40.165 14:23:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:40.165 14:23:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:40.165 14:23:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:40.165 14:23:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:40.165 14:23:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:40.165 14:23:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:40.165 14:23:21 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:18:40.165 14:23:21 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:18:40.165 14:23:21 -- nvmf/common.sh@717 -- # local ip 00:18:40.165 14:23:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:40.165 14:23:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:40.165 14:23:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.165 14:23:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.165 14:23:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:40.165 14:23:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.165 14:23:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:40.165 14:23:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:40.165 14:23:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:40.165 14:23:21 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:18:40.165 14:23:21 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:18:40.165 14:23:21 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:18:40.165 14:23:21 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:18:40.165 14:23:21 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:40.165 14:23:21 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:40.165 14:23:21 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:40.165 14:23:21 -- nvmf/common.sh@628 -- # local block nvme 00:18:40.165 14:23:21 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:18:40.165 14:23:21 -- nvmf/common.sh@631 -- # modprobe nvmet 00:18:40.165 14:23:21 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:40.165 14:23:21 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:18:41.100 Waiting for block devices as requested 00:18:41.100 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:18:41.100 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:18:41.100 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:18:41.100 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:18:41.360 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:18:41.360 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:18:41.360 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:18:41.360 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:18:41.620 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:18:41.620 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:18:41.620 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:18:41.884 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:18:41.884 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:18:41.884 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:18:41.884 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:18:42.143 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:18:42.143 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:18:42.143 14:23:23 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:18:42.143 14:23:23 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:42.143 14:23:23 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:18:42.143 14:23:23 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:18:42.143 14:23:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:42.143 14:23:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:42.143 14:23:23 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:18:42.143 14:23:23 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:42.143 14:23:23 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:18:42.143 No valid GPT data, bailing 00:18:42.143 14:23:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:42.143 14:23:23 -- scripts/common.sh@391 -- # pt= 00:18:42.143 14:23:23 -- scripts/common.sh@392 -- # return 1 00:18:42.143 14:23:23 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:18:42.143 14:23:23 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:18:42.143 14:23:23 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:42.143 14:23:23 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:42.143 14:23:23 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:42.143 14:23:23 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:18:42.143 14:23:23 -- nvmf/common.sh@656 -- # echo 1 00:18:42.143 14:23:23 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:18:42.143 14:23:23 -- nvmf/common.sh@658 -- # echo 1 00:18:42.143 14:23:23 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:18:42.143 14:23:23 -- nvmf/common.sh@661 -- # echo tcp 00:18:42.143 14:23:23 -- nvmf/common.sh@662 -- # echo 4420 00:18:42.143 14:23:23 -- nvmf/common.sh@663 -- # echo ipv4 00:18:42.143 14:23:23 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:42.143 14:23:23 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:18:42.403 00:18:42.403 Discovery Log Number of Records 2, Generation counter 2 00:18:42.403 =====Discovery Log Entry 0====== 00:18:42.403 trtype: tcp 00:18:42.403 adrfam: ipv4 00:18:42.403 subtype: current discovery subsystem 00:18:42.403 treq: not specified, sq flow control disable supported 00:18:42.403 portid: 1 00:18:42.403 trsvcid: 4420 00:18:42.403 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:42.403 traddr: 10.0.0.1 00:18:42.403 eflags: none 00:18:42.403 sectype: none 00:18:42.403 =====Discovery Log Entry 1====== 00:18:42.403 trtype: tcp 00:18:42.403 adrfam: ipv4 00:18:42.403 subtype: nvme subsystem 00:18:42.403 treq: not specified, sq flow control disable supported 00:18:42.403 portid: 1 00:18:42.403 trsvcid: 4420 00:18:42.403 subnqn: nqn.2016-06.io.spdk:testnqn 00:18:42.403 traddr: 10.0.0.1 00:18:42.403 eflags: none 00:18:42.403 sectype: none 00:18:42.403 14:23:23 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:18:42.403 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:18:42.403 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.403 ===================================================== 00:18:42.403 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:42.403 ===================================================== 00:18:42.403 Controller Capabilities/Features 00:18:42.403 ================================ 00:18:42.403 Vendor ID: 0000 00:18:42.403 Subsystem Vendor ID: 0000 00:18:42.403 Serial Number: 89d33ec0b709bf25b6fc 00:18:42.403 Model Number: Linux 00:18:42.403 Firmware Version: 6.7.0-68 00:18:42.403 Recommended Arb Burst: 0 00:18:42.403 IEEE OUI Identifier: 00 00 00 00:18:42.403 Multi-path I/O 00:18:42.403 May have multiple subsystem ports: No 00:18:42.403 May have multiple controllers: No 00:18:42.403 Associated with SR-IOV VF: No 00:18:42.403 Max Data Transfer Size: Unlimited 00:18:42.403 Max Number of Namespaces: 0 00:18:42.403 Max Number of I/O Queues: 1024 00:18:42.403 NVMe Specification Version (VS): 1.3 00:18:42.403 NVMe Specification Version (Identify): 1.3 00:18:42.403 Maximum Queue Entries: 1024 00:18:42.403 Contiguous Queues Required: No 00:18:42.403 Arbitration Mechanisms Supported 00:18:42.403 Weighted Round Robin: Not Supported 00:18:42.403 Vendor Specific: Not Supported 00:18:42.403 Reset Timeout: 7500 ms 00:18:42.403 Doorbell Stride: 4 bytes 00:18:42.403 NVM Subsystem Reset: Not Supported 00:18:42.403 Command Sets Supported 00:18:42.403 NVM Command Set: Supported 00:18:42.403 Boot Partition: Not Supported 00:18:42.403 Memory Page Size Minimum: 4096 bytes 00:18:42.403 Memory Page Size Maximum: 4096 bytes 00:18:42.403 Persistent Memory Region: Not Supported 00:18:42.403 Optional Asynchronous Events Supported 00:18:42.403 Namespace Attribute Notices: Not Supported 00:18:42.403 Firmware Activation Notices: Not Supported 00:18:42.403 ANA Change Notices: Not Supported 00:18:42.403 PLE Aggregate Log Change Notices: Not Supported 00:18:42.403 LBA Status Info Alert Notices: Not Supported 00:18:42.403 EGE Aggregate Log Change Notices: Not Supported 00:18:42.403 Normal NVM Subsystem Shutdown event: Not Supported 00:18:42.403 Zone Descriptor Change Notices: Not Supported 00:18:42.403 Discovery Log Change Notices: Supported 
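
The discovery log above (and the identify dump that resumes just below) is served by the kernel nvmet target that configure_kernel_target assembled through configfs a few records earlier. Condensed into a standalone sketch with the NQN, backing device, and address taken from the trace; note that xtrace does not print redirection targets, so the attribute file names here are the standard nvmet configfs layout, inferred rather than read from the log:

# Sketch of configure_kernel_target: export /dev/nvme0n1 as a kernel
# NVMe-oF/TCP subsystem via configfs (values as seen in the trace;
# attribute file names assumed from the usual nvmet configfs layout).
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe nvmet nvmet-tcp
mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
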
00:18:42.403 Controller Attributes 00:18:42.403 128-bit Host Identifier: Not Supported 00:18:42.403 Non-Operational Permissive Mode: Not Supported 00:18:42.403 NVM Sets: Not Supported 00:18:42.403 Read Recovery Levels: Not Supported 00:18:42.403 Endurance Groups: Not Supported 00:18:42.403 Predictable Latency Mode: Not Supported 00:18:42.403 Traffic Based Keep ALive: Not Supported 00:18:42.403 Namespace Granularity: Not Supported 00:18:42.403 SQ Associations: Not Supported 00:18:42.403 UUID List: Not Supported 00:18:42.403 Multi-Domain Subsystem: Not Supported 00:18:42.403 Fixed Capacity Management: Not Supported 00:18:42.403 Variable Capacity Management: Not Supported 00:18:42.403 Delete Endurance Group: Not Supported 00:18:42.403 Delete NVM Set: Not Supported 00:18:42.403 Extended LBA Formats Supported: Not Supported 00:18:42.403 Flexible Data Placement Supported: Not Supported 00:18:42.403 00:18:42.403 Controller Memory Buffer Support 00:18:42.403 ================================ 00:18:42.403 Supported: No 00:18:42.403 00:18:42.403 Persistent Memory Region Support 00:18:42.403 ================================ 00:18:42.403 Supported: No 00:18:42.403 00:18:42.403 Admin Command Set Attributes 00:18:42.403 ============================ 00:18:42.403 Security Send/Receive: Not Supported 00:18:42.403 Format NVM: Not Supported 00:18:42.403 Firmware Activate/Download: Not Supported 00:18:42.403 Namespace Management: Not Supported 00:18:42.403 Device Self-Test: Not Supported 00:18:42.403 Directives: Not Supported 00:18:42.403 NVMe-MI: Not Supported 00:18:42.403 Virtualization Management: Not Supported 00:18:42.403 Doorbell Buffer Config: Not Supported 00:18:42.403 Get LBA Status Capability: Not Supported 00:18:42.403 Command & Feature Lockdown Capability: Not Supported 00:18:42.403 Abort Command Limit: 1 00:18:42.403 Async Event Request Limit: 1 00:18:42.403 Number of Firmware Slots: N/A 00:18:42.403 Firmware Slot 1 Read-Only: N/A 00:18:42.403 Firmware Activation Without Reset: N/A 00:18:42.403 Multiple Update Detection Support: N/A 00:18:42.403 Firmware Update Granularity: No Information Provided 00:18:42.403 Per-Namespace SMART Log: No 00:18:42.403 Asymmetric Namespace Access Log Page: Not Supported 00:18:42.404 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:42.404 Command Effects Log Page: Not Supported 00:18:42.404 Get Log Page Extended Data: Supported 00:18:42.404 Telemetry Log Pages: Not Supported 00:18:42.404 Persistent Event Log Pages: Not Supported 00:18:42.404 Supported Log Pages Log Page: May Support 00:18:42.404 Commands Supported & Effects Log Page: Not Supported 00:18:42.404 Feature Identifiers & Effects Log Page:May Support 00:18:42.404 NVMe-MI Commands & Effects Log Page: May Support 00:18:42.404 Data Area 4 for Telemetry Log: Not Supported 00:18:42.404 Error Log Page Entries Supported: 1 00:18:42.404 Keep Alive: Not Supported 00:18:42.404 00:18:42.404 NVM Command Set Attributes 00:18:42.404 ========================== 00:18:42.404 Submission Queue Entry Size 00:18:42.404 Max: 1 00:18:42.404 Min: 1 00:18:42.404 Completion Queue Entry Size 00:18:42.404 Max: 1 00:18:42.404 Min: 1 00:18:42.404 Number of Namespaces: 0 00:18:42.404 Compare Command: Not Supported 00:18:42.404 Write Uncorrectable Command: Not Supported 00:18:42.404 Dataset Management Command: Not Supported 00:18:42.404 Write Zeroes Command: Not Supported 00:18:42.404 Set Features Save Field: Not Supported 00:18:42.404 Reservations: Not Supported 00:18:42.404 Timestamp: Not Supported 00:18:42.404 Copy: Not 
Supported 00:18:42.404 Volatile Write Cache: Not Present 00:18:42.404 Atomic Write Unit (Normal): 1 00:18:42.404 Atomic Write Unit (PFail): 1 00:18:42.404 Atomic Compare & Write Unit: 1 00:18:42.404 Fused Compare & Write: Not Supported 00:18:42.404 Scatter-Gather List 00:18:42.404 SGL Command Set: Supported 00:18:42.404 SGL Keyed: Not Supported 00:18:42.404 SGL Bit Bucket Descriptor: Not Supported 00:18:42.404 SGL Metadata Pointer: Not Supported 00:18:42.404 Oversized SGL: Not Supported 00:18:42.404 SGL Metadata Address: Not Supported 00:18:42.404 SGL Offset: Supported 00:18:42.404 Transport SGL Data Block: Not Supported 00:18:42.404 Replay Protected Memory Block: Not Supported 00:18:42.404 00:18:42.404 Firmware Slot Information 00:18:42.404 ========================= 00:18:42.404 Active slot: 0 00:18:42.404 00:18:42.404 00:18:42.404 Error Log 00:18:42.404 ========= 00:18:42.404 00:18:42.404 Active Namespaces 00:18:42.404 ================= 00:18:42.404 Discovery Log Page 00:18:42.404 ================== 00:18:42.404 Generation Counter: 2 00:18:42.404 Number of Records: 2 00:18:42.404 Record Format: 0 00:18:42.404 00:18:42.404 Discovery Log Entry 0 00:18:42.404 ---------------------- 00:18:42.404 Transport Type: 3 (TCP) 00:18:42.404 Address Family: 1 (IPv4) 00:18:42.404 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:42.404 Entry Flags: 00:18:42.404 Duplicate Returned Information: 0 00:18:42.404 Explicit Persistent Connection Support for Discovery: 0 00:18:42.404 Transport Requirements: 00:18:42.404 Secure Channel: Not Specified 00:18:42.404 Port ID: 1 (0x0001) 00:18:42.404 Controller ID: 65535 (0xffff) 00:18:42.404 Admin Max SQ Size: 32 00:18:42.404 Transport Service Identifier: 4420 00:18:42.404 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:42.404 Transport Address: 10.0.0.1 00:18:42.404 Discovery Log Entry 1 00:18:42.404 ---------------------- 00:18:42.404 Transport Type: 3 (TCP) 00:18:42.404 Address Family: 1 (IPv4) 00:18:42.404 Subsystem Type: 2 (NVM Subsystem) 00:18:42.404 Entry Flags: 00:18:42.404 Duplicate Returned Information: 0 00:18:42.404 Explicit Persistent Connection Support for Discovery: 0 00:18:42.404 Transport Requirements: 00:18:42.404 Secure Channel: Not Specified 00:18:42.404 Port ID: 1 (0x0001) 00:18:42.404 Controller ID: 65535 (0xffff) 00:18:42.404 Admin Max SQ Size: 32 00:18:42.404 Transport Service Identifier: 4420 00:18:42.404 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:18:42.404 Transport Address: 10.0.0.1 00:18:42.404 14:23:23 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:18:42.404 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.404 get_feature(0x01) failed 00:18:42.404 get_feature(0x02) failed 00:18:42.404 get_feature(0x04) failed 00:18:42.404 ===================================================== 00:18:42.404 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:18:42.404 ===================================================== 00:18:42.404 Controller Capabilities/Features 00:18:42.404 ================================ 00:18:42.404 Vendor ID: 0000 00:18:42.404 Subsystem Vendor ID: 0000 00:18:42.404 Serial Number: dcea40ae0395e90b8837 00:18:42.404 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:18:42.404 Firmware Version: 6.7.0-68 00:18:42.404 Recommended Arb Burst: 6 00:18:42.404 IEEE OUI Identifier: 00 00 00 
00:18:42.404 Multi-path I/O 00:18:42.404 May have multiple subsystem ports: Yes 00:18:42.404 May have multiple controllers: Yes 00:18:42.404 Associated with SR-IOV VF: No 00:18:42.404 Max Data Transfer Size: Unlimited 00:18:42.404 Max Number of Namespaces: 1024 00:18:42.404 Max Number of I/O Queues: 128 00:18:42.404 NVMe Specification Version (VS): 1.3 00:18:42.404 NVMe Specification Version (Identify): 1.3 00:18:42.404 Maximum Queue Entries: 1024 00:18:42.404 Contiguous Queues Required: No 00:18:42.404 Arbitration Mechanisms Supported 00:18:42.404 Weighted Round Robin: Not Supported 00:18:42.404 Vendor Specific: Not Supported 00:18:42.404 Reset Timeout: 7500 ms 00:18:42.404 Doorbell Stride: 4 bytes 00:18:42.404 NVM Subsystem Reset: Not Supported 00:18:42.404 Command Sets Supported 00:18:42.404 NVM Command Set: Supported 00:18:42.404 Boot Partition: Not Supported 00:18:42.404 Memory Page Size Minimum: 4096 bytes 00:18:42.404 Memory Page Size Maximum: 4096 bytes 00:18:42.404 Persistent Memory Region: Not Supported 00:18:42.404 Optional Asynchronous Events Supported 00:18:42.404 Namespace Attribute Notices: Supported 00:18:42.404 Firmware Activation Notices: Not Supported 00:18:42.404 ANA Change Notices: Supported 00:18:42.404 PLE Aggregate Log Change Notices: Not Supported 00:18:42.404 LBA Status Info Alert Notices: Not Supported 00:18:42.404 EGE Aggregate Log Change Notices: Not Supported 00:18:42.404 Normal NVM Subsystem Shutdown event: Not Supported 00:18:42.404 Zone Descriptor Change Notices: Not Supported 00:18:42.404 Discovery Log Change Notices: Not Supported 00:18:42.404 Controller Attributes 00:18:42.404 128-bit Host Identifier: Supported 00:18:42.404 Non-Operational Permissive Mode: Not Supported 00:18:42.404 NVM Sets: Not Supported 00:18:42.404 Read Recovery Levels: Not Supported 00:18:42.404 Endurance Groups: Not Supported 00:18:42.404 Predictable Latency Mode: Not Supported 00:18:42.404 Traffic Based Keep ALive: Supported 00:18:42.404 Namespace Granularity: Not Supported 00:18:42.404 SQ Associations: Not Supported 00:18:42.404 UUID List: Not Supported 00:18:42.404 Multi-Domain Subsystem: Not Supported 00:18:42.405 Fixed Capacity Management: Not Supported 00:18:42.405 Variable Capacity Management: Not Supported 00:18:42.405 Delete Endurance Group: Not Supported 00:18:42.405 Delete NVM Set: Not Supported 00:18:42.405 Extended LBA Formats Supported: Not Supported 00:18:42.405 Flexible Data Placement Supported: Not Supported 00:18:42.405 00:18:42.405 Controller Memory Buffer Support 00:18:42.405 ================================ 00:18:42.405 Supported: No 00:18:42.405 00:18:42.405 Persistent Memory Region Support 00:18:42.405 ================================ 00:18:42.405 Supported: No 00:18:42.405 00:18:42.405 Admin Command Set Attributes 00:18:42.405 ============================ 00:18:42.405 Security Send/Receive: Not Supported 00:18:42.405 Format NVM: Not Supported 00:18:42.405 Firmware Activate/Download: Not Supported 00:18:42.405 Namespace Management: Not Supported 00:18:42.405 Device Self-Test: Not Supported 00:18:42.405 Directives: Not Supported 00:18:42.405 NVMe-MI: Not Supported 00:18:42.405 Virtualization Management: Not Supported 00:18:42.405 Doorbell Buffer Config: Not Supported 00:18:42.405 Get LBA Status Capability: Not Supported 00:18:42.405 Command & Feature Lockdown Capability: Not Supported 00:18:42.405 Abort Command Limit: 4 00:18:42.405 Async Event Request Limit: 4 00:18:42.405 Number of Firmware Slots: N/A 00:18:42.405 Firmware Slot 1 Read-Only: N/A 00:18:42.405 
Firmware Activation Without Reset: N/A 00:18:42.405 Multiple Update Detection Support: N/A 00:18:42.405 Firmware Update Granularity: No Information Provided 00:18:42.405 Per-Namespace SMART Log: Yes 00:18:42.405 Asymmetric Namespace Access Log Page: Supported 00:18:42.405 ANA Transition Time : 10 sec 00:18:42.405 00:18:42.405 Asymmetric Namespace Access Capabilities 00:18:42.405 ANA Optimized State : Supported 00:18:42.405 ANA Non-Optimized State : Supported 00:18:42.405 ANA Inaccessible State : Supported 00:18:42.405 ANA Persistent Loss State : Supported 00:18:42.405 ANA Change State : Supported 00:18:42.405 ANAGRPID is not changed : No 00:18:42.405 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:18:42.405 00:18:42.405 ANA Group Identifier Maximum : 128 00:18:42.405 Number of ANA Group Identifiers : 128 00:18:42.405 Max Number of Allowed Namespaces : 1024 00:18:42.405 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:18:42.405 Command Effects Log Page: Supported 00:18:42.405 Get Log Page Extended Data: Supported 00:18:42.405 Telemetry Log Pages: Not Supported 00:18:42.405 Persistent Event Log Pages: Not Supported 00:18:42.405 Supported Log Pages Log Page: May Support 00:18:42.405 Commands Supported & Effects Log Page: Not Supported 00:18:42.405 Feature Identifiers & Effects Log Page:May Support 00:18:42.405 NVMe-MI Commands & Effects Log Page: May Support 00:18:42.405 Data Area 4 for Telemetry Log: Not Supported 00:18:42.405 Error Log Page Entries Supported: 128 00:18:42.405 Keep Alive: Supported 00:18:42.405 Keep Alive Granularity: 1000 ms 00:18:42.405 00:18:42.405 NVM Command Set Attributes 00:18:42.405 ========================== 00:18:42.405 Submission Queue Entry Size 00:18:42.405 Max: 64 00:18:42.405 Min: 64 00:18:42.405 Completion Queue Entry Size 00:18:42.405 Max: 16 00:18:42.405 Min: 16 00:18:42.405 Number of Namespaces: 1024 00:18:42.405 Compare Command: Not Supported 00:18:42.405 Write Uncorrectable Command: Not Supported 00:18:42.405 Dataset Management Command: Supported 00:18:42.405 Write Zeroes Command: Supported 00:18:42.405 Set Features Save Field: Not Supported 00:18:42.405 Reservations: Not Supported 00:18:42.405 Timestamp: Not Supported 00:18:42.405 Copy: Not Supported 00:18:42.405 Volatile Write Cache: Present 00:18:42.405 Atomic Write Unit (Normal): 1 00:18:42.405 Atomic Write Unit (PFail): 1 00:18:42.405 Atomic Compare & Write Unit: 1 00:18:42.405 Fused Compare & Write: Not Supported 00:18:42.405 Scatter-Gather List 00:18:42.405 SGL Command Set: Supported 00:18:42.405 SGL Keyed: Not Supported 00:18:42.405 SGL Bit Bucket Descriptor: Not Supported 00:18:42.405 SGL Metadata Pointer: Not Supported 00:18:42.405 Oversized SGL: Not Supported 00:18:42.405 SGL Metadata Address: Not Supported 00:18:42.405 SGL Offset: Supported 00:18:42.405 Transport SGL Data Block: Not Supported 00:18:42.405 Replay Protected Memory Block: Not Supported 00:18:42.405 00:18:42.405 Firmware Slot Information 00:18:42.405 ========================= 00:18:42.405 Active slot: 0 00:18:42.405 00:18:42.405 Asymmetric Namespace Access 00:18:42.405 =========================== 00:18:42.405 Change Count : 0 00:18:42.405 Number of ANA Group Descriptors : 1 00:18:42.405 ANA Group Descriptor : 0 00:18:42.405 ANA Group ID : 1 00:18:42.405 Number of NSID Values : 1 00:18:42.405 Change Count : 0 00:18:42.405 ANA State : 1 00:18:42.405 Namespace Identifier : 1 00:18:42.405 00:18:42.405 Commands Supported and Effects 00:18:42.405 ============================== 00:18:42.405 Admin Commands 00:18:42.405 -------------- 
00:18:42.405 Get Log Page (02h): Supported 00:18:42.405 Identify (06h): Supported 00:18:42.405 Abort (08h): Supported 00:18:42.405 Set Features (09h): Supported 00:18:42.405 Get Features (0Ah): Supported 00:18:42.405 Asynchronous Event Request (0Ch): Supported 00:18:42.405 Keep Alive (18h): Supported 00:18:42.405 I/O Commands 00:18:42.405 ------------ 00:18:42.405 Flush (00h): Supported 00:18:42.405 Write (01h): Supported LBA-Change 00:18:42.405 Read (02h): Supported 00:18:42.405 Write Zeroes (08h): Supported LBA-Change 00:18:42.405 Dataset Management (09h): Supported 00:18:42.405 00:18:42.405 Error Log 00:18:42.405 ========= 00:18:42.405 Entry: 0 00:18:42.405 Error Count: 0x3 00:18:42.405 Submission Queue Id: 0x0 00:18:42.405 Command Id: 0x5 00:18:42.405 Phase Bit: 0 00:18:42.405 Status Code: 0x2 00:18:42.405 Status Code Type: 0x0 00:18:42.405 Do Not Retry: 1 00:18:42.405 Error Location: 0x28 00:18:42.405 LBA: 0x0 00:18:42.405 Namespace: 0x0 00:18:42.405 Vendor Log Page: 0x0 00:18:42.405 ----------- 00:18:42.405 Entry: 1 00:18:42.405 Error Count: 0x2 00:18:42.405 Submission Queue Id: 0x0 00:18:42.405 Command Id: 0x5 00:18:42.405 Phase Bit: 0 00:18:42.405 Status Code: 0x2 00:18:42.405 Status Code Type: 0x0 00:18:42.405 Do Not Retry: 1 00:18:42.405 Error Location: 0x28 00:18:42.405 LBA: 0x0 00:18:42.405 Namespace: 0x0 00:18:42.405 Vendor Log Page: 0x0 00:18:42.406 ----------- 00:18:42.406 Entry: 2 00:18:42.406 Error Count: 0x1 00:18:42.406 Submission Queue Id: 0x0 00:18:42.406 Command Id: 0x4 00:18:42.406 Phase Bit: 0 00:18:42.406 Status Code: 0x2 00:18:42.406 Status Code Type: 0x0 00:18:42.406 Do Not Retry: 1 00:18:42.406 Error Location: 0x28 00:18:42.406 LBA: 0x0 00:18:42.406 Namespace: 0x0 00:18:42.406 Vendor Log Page: 0x0 00:18:42.406 00:18:42.406 Number of Queues 00:18:42.406 ================ 00:18:42.406 Number of I/O Submission Queues: 128 00:18:42.406 Number of I/O Completion Queues: 128 00:18:42.406 00:18:42.406 ZNS Specific Controller Data 00:18:42.406 ============================ 00:18:42.406 Zone Append Size Limit: 0 00:18:42.406 00:18:42.406 00:18:42.406 Active Namespaces 00:18:42.406 ================= 00:18:42.406 get_feature(0x05) failed 00:18:42.406 Namespace ID:1 00:18:42.406 Command Set Identifier: NVM (00h) 00:18:42.406 Deallocate: Supported 00:18:42.406 Deallocated/Unwritten Error: Not Supported 00:18:42.406 Deallocated Read Value: Unknown 00:18:42.406 Deallocate in Write Zeroes: Not Supported 00:18:42.406 Deallocated Guard Field: 0xFFFF 00:18:42.406 Flush: Supported 00:18:42.406 Reservation: Not Supported 00:18:42.406 Namespace Sharing Capabilities: Multiple Controllers 00:18:42.406 Size (in LBAs): 1953525168 (931GiB) 00:18:42.406 Capacity (in LBAs): 1953525168 (931GiB) 00:18:42.406 Utilization (in LBAs): 1953525168 (931GiB) 00:18:42.406 UUID: 2f7b6766-a902-4998-b30b-a8bf6a82f166 00:18:42.406 Thin Provisioning: Not Supported 00:18:42.406 Per-NS Atomic Units: Yes 00:18:42.406 Atomic Boundary Size (Normal): 0 00:18:42.406 Atomic Boundary Size (PFail): 0 00:18:42.406 Atomic Boundary Offset: 0 00:18:42.406 NGUID/EUI64 Never Reused: No 00:18:42.406 ANA group ID: 1 00:18:42.406 Namespace Write Protected: No 00:18:42.406 Number of LBA Formats: 1 00:18:42.406 Current LBA Format: LBA Format #00 00:18:42.406 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:42.406 00:18:42.406 14:23:23 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:18:42.406 14:23:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:42.406 14:23:23 -- nvmf/common.sh@117 -- # sync 00:18:42.406 14:23:23 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:42.406 14:23:23 -- nvmf/common.sh@120 -- # set +e 00:18:42.406 14:23:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:42.406 14:23:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:42.406 rmmod nvme_tcp 00:18:42.406 rmmod nvme_fabrics 00:18:42.406 14:23:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:42.406 14:23:23 -- nvmf/common.sh@124 -- # set -e 00:18:42.406 14:23:23 -- nvmf/common.sh@125 -- # return 0 00:18:42.406 14:23:23 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:18:42.406 14:23:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:42.406 14:23:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:42.406 14:23:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:42.406 14:23:23 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:42.406 14:23:23 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:42.406 14:23:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.406 14:23:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.406 14:23:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.946 14:23:25 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:44.946 14:23:25 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:18:44.946 14:23:25 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:18:44.946 14:23:25 -- nvmf/common.sh@675 -- # echo 0 00:18:44.946 14:23:25 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:44.946 14:23:25 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:44.946 14:23:25 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:44.946 14:23:25 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:44.946 14:23:25 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:18:44.946 14:23:25 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:18:44.946 14:23:25 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:18:45.516 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:18:45.516 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:18:45.516 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:18:45.516 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:18:45.516 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:18:45.516 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:18:45.516 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:18:45.516 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:18:45.516 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:18:45.516 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:18:45.516 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:18:45.516 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:18:45.516 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:18:45.516 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:18:45.516 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:18:45.516 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:18:46.457 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:18:46.457 00:18:46.457 real 0m8.284s 00:18:46.457 user 0m1.650s 00:18:46.457 sys 0m2.824s 00:18:46.457 14:23:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:46.457 14:23:28 -- common/autotest_common.sh@10 -- # set +x 00:18:46.457 ************************************ 00:18:46.457 END 
TEST nvmf_identify_kernel_target 00:18:46.457 ************************************ 00:18:46.715 14:23:28 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:46.715 14:23:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:46.715 14:23:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:46.715 14:23:28 -- common/autotest_common.sh@10 -- # set +x 00:18:46.715 ************************************ 00:18:46.715 START TEST nvmf_auth 00:18:46.715 ************************************ 00:18:46.715 14:23:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:46.715 * Looking for test storage... 00:18:46.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:46.715 14:23:28 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:46.716 14:23:28 -- nvmf/common.sh@7 -- # uname -s 00:18:46.716 14:23:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.716 14:23:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.716 14:23:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.716 14:23:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.716 14:23:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.716 14:23:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.716 14:23:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.716 14:23:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.716 14:23:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.716 14:23:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.716 14:23:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:46.716 14:23:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:46.716 14:23:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.716 14:23:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.716 14:23:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.716 14:23:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.716 14:23:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:46.716 14:23:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.716 14:23:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.716 14:23:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.716 14:23:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.716 14:23:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.716 14:23:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.716 14:23:28 -- paths/export.sh@5 -- # export PATH 00:18:46.716 14:23:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.716 14:23:28 -- nvmf/common.sh@47 -- # : 0 00:18:46.716 14:23:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:46.716 14:23:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:46.716 14:23:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.716 14:23:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.716 14:23:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.716 14:23:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:46.716 14:23:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:46.716 14:23:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:46.716 14:23:28 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:46.716 14:23:28 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:46.716 14:23:28 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:18:46.716 14:23:28 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:18:46.716 14:23:28 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:46.716 14:23:28 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:46.716 14:23:28 -- host/auth.sh@21 -- # keys=() 00:18:46.716 14:23:28 -- host/auth.sh@77 -- # nvmftestinit 00:18:46.716 14:23:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:46.716 14:23:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.716 14:23:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:46.716 14:23:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:46.716 14:23:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:46.716 14:23:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.716 14:23:28 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.716 14:23:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.716 14:23:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:46.716 14:23:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:46.716 14:23:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:46.716 14:23:28 -- common/autotest_common.sh@10 -- # set +x 00:18:48.621 14:23:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:48.621 14:23:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:48.621 14:23:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:48.621 14:23:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:48.621 14:23:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:48.621 14:23:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:48.621 14:23:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:48.621 14:23:29 -- nvmf/common.sh@295 -- # net_devs=() 00:18:48.621 14:23:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:48.621 14:23:29 -- nvmf/common.sh@296 -- # e810=() 00:18:48.621 14:23:29 -- nvmf/common.sh@296 -- # local -ga e810 00:18:48.621 14:23:29 -- nvmf/common.sh@297 -- # x722=() 00:18:48.621 14:23:29 -- nvmf/common.sh@297 -- # local -ga x722 00:18:48.621 14:23:29 -- nvmf/common.sh@298 -- # mlx=() 00:18:48.621 14:23:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:48.621 14:23:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:48.621 14:23:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:48.621 14:23:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:48.621 14:23:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:48.621 14:23:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:48.621 14:23:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:48.621 14:23:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:48.621 14:23:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:48.621 14:23:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:48.621 14:23:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:48.621 14:23:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:48.621 14:23:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:48.621 14:23:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:48.621 14:23:29 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:48.621 14:23:29 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:48.621 14:23:29 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:48.621 14:23:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:48.621 14:23:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:48.621 14:23:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:48.621 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:48.621 14:23:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:48.621 14:23:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:48.621 14:23:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.621 14:23:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.621 14:23:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:48.621 14:23:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:48.621 14:23:29 -- nvmf/common.sh@341 -- # echo 'Found 
0000:08:00.1 (0x8086 - 0x159b)' 00:18:48.621 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:48.621 14:23:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:48.621 14:23:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:48.621 14:23:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.621 14:23:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.621 14:23:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:48.621 14:23:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:48.621 14:23:29 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:48.621 14:23:29 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:48.621 14:23:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:48.622 14:23:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.622 14:23:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:48.622 14:23:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.622 14:23:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:48.622 Found net devices under 0000:08:00.0: cvl_0_0 00:18:48.622 14:23:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.622 14:23:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:48.622 14:23:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.622 14:23:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:48.622 14:23:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.622 14:23:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:48.622 Found net devices under 0000:08:00.1: cvl_0_1 00:18:48.622 14:23:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.622 14:23:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:48.622 14:23:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:48.622 14:23:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:48.622 14:23:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:48.622 14:23:29 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:48.622 14:23:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:48.622 14:23:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:48.622 14:23:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:48.622 14:23:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:48.622 14:23:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:48.622 14:23:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:48.622 14:23:29 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:48.622 14:23:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:48.622 14:23:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:48.622 14:23:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:48.622 14:23:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:48.622 14:23:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:48.622 14:23:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:48.622 14:23:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:48.622 14:23:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:48.622 14:23:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:48.622 14:23:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:48.622 14:23:29 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:48.622 14:23:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:48.622 14:23:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:48.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:48.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:18:48.622 00:18:48.622 --- 10.0.0.2 ping statistics --- 00:18:48.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.622 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:18:48.622 14:23:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:48.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:48.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:18:48.622 00:18:48.622 --- 10.0.0.1 ping statistics --- 00:18:48.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.622 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:18:48.622 14:23:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:48.622 14:23:29 -- nvmf/common.sh@411 -- # return 0 00:18:48.622 14:23:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:48.622 14:23:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:48.622 14:23:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:48.622 14:23:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:48.622 14:23:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:48.622 14:23:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:48.622 14:23:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:48.622 14:23:29 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:18:48.622 14:23:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:48.622 14:23:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:48.622 14:23:29 -- common/autotest_common.sh@10 -- # set +x 00:18:48.622 14:23:29 -- nvmf/common.sh@470 -- # nvmfpid=3197652 00:18:48.622 14:23:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:18:48.622 14:23:29 -- nvmf/common.sh@471 -- # waitforlisten 3197652 00:18:48.622 14:23:29 -- common/autotest_common.sh@817 -- # '[' -z 3197652 ']' 00:18:48.622 14:23:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.622 14:23:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:48.622 14:23:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
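
nvmf_tcp_init above splits the two E810 ports into a point-to-point pair: cvl_0_0 moves into a fresh network namespace as the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Reduced to its essentials, with interface names and addresses as they appear in the trace:

# Sketch of nvmf_tcp_init's topology: target NIC in its own netns,
# initiator NIC in the root namespace, one /24 between them.
ns=cvl_0_0_ns_spdk
ip netns add "$ns"
ip link set cvl_0_0 netns "$ns"                      # target port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$ns" ip link set cvl_0_0 up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
# the target app then runs inside the namespace, as nvmfappstart does above:
# ip netns exec "$ns" .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth
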
00:18:48.622 14:23:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:48.622 14:23:29 -- common/autotest_common.sh@10 -- # set +x 00:18:48.881 14:23:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:48.881 14:23:30 -- common/autotest_common.sh@850 -- # return 0 00:18:48.881 14:23:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:48.881 14:23:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:48.881 14:23:30 -- common/autotest_common.sh@10 -- # set +x 00:18:48.881 14:23:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.881 14:23:30 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:18:48.881 14:23:30 -- host/auth.sh@81 -- # gen_key null 32 00:18:48.881 14:23:30 -- host/auth.sh@53 -- # local digest len file key 00:18:48.881 14:23:30 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:48.881 14:23:30 -- host/auth.sh@54 -- # local -A digests 00:18:48.881 14:23:30 -- host/auth.sh@56 -- # digest=null 00:18:48.881 14:23:30 -- host/auth.sh@56 -- # len=32 00:18:48.881 14:23:30 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:48.881 14:23:30 -- host/auth.sh@57 -- # key=dbc87e656203b92173550a2b66c93206 00:18:48.881 14:23:30 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:18:48.881 14:23:30 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.rdu 00:18:48.881 14:23:30 -- host/auth.sh@59 -- # format_dhchap_key dbc87e656203b92173550a2b66c93206 0 00:18:48.881 14:23:30 -- nvmf/common.sh@708 -- # format_key DHHC-1 dbc87e656203b92173550a2b66c93206 0 00:18:48.881 14:23:30 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:48.881 14:23:30 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:18:48.881 14:23:30 -- nvmf/common.sh@693 -- # key=dbc87e656203b92173550a2b66c93206 00:18:48.881 14:23:30 -- nvmf/common.sh@693 -- # digest=0 00:18:48.881 14:23:30 -- nvmf/common.sh@694 -- # python - 00:18:48.881 14:23:30 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.rdu 00:18:48.881 14:23:30 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.rdu 00:18:48.881 14:23:30 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.rdu 00:18:48.881 14:23:30 -- host/auth.sh@82 -- # gen_key null 48 00:18:48.881 14:23:30 -- host/auth.sh@53 -- # local digest len file key 00:18:48.881 14:23:30 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:48.881 14:23:30 -- host/auth.sh@54 -- # local -A digests 00:18:48.881 14:23:30 -- host/auth.sh@56 -- # digest=null 00:18:48.881 14:23:30 -- host/auth.sh@56 -- # len=48 00:18:48.881 14:23:30 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:48.881 14:23:30 -- host/auth.sh@57 -- # key=7e0d573bc0a290416ba9ec839447bc33c5049422d01fb74c 00:18:48.881 14:23:30 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:18:48.881 14:23:30 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.ZrV 00:18:48.881 14:23:30 -- host/auth.sh@59 -- # format_dhchap_key 7e0d573bc0a290416ba9ec839447bc33c5049422d01fb74c 0 00:18:48.881 14:23:30 -- nvmf/common.sh@708 -- # format_key DHHC-1 7e0d573bc0a290416ba9ec839447bc33c5049422d01fb74c 0 00:18:48.881 14:23:30 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:48.881 14:23:30 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:18:48.881 14:23:30 -- nvmf/common.sh@693 -- # key=7e0d573bc0a290416ba9ec839447bc33c5049422d01fb74c 00:18:48.881 14:23:30 -- nvmf/common.sh@693 -- # 
digest=0 00:18:48.881 14:23:30 -- nvmf/common.sh@694 -- # python - 00:18:48.881 14:23:30 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.ZrV 00:18:48.881 14:23:30 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.ZrV 00:18:48.881 14:23:30 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.ZrV 00:18:48.881 14:23:30 -- host/auth.sh@83 -- # gen_key sha256 32 00:18:48.881 14:23:30 -- host/auth.sh@53 -- # local digest len file key 00:18:48.881 14:23:30 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:48.881 14:23:30 -- host/auth.sh@54 -- # local -A digests 00:18:48.881 14:23:30 -- host/auth.sh@56 -- # digest=sha256 00:18:48.881 14:23:30 -- host/auth.sh@56 -- # len=32 00:18:48.881 14:23:30 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:48.881 14:23:30 -- host/auth.sh@57 -- # key=01169562eb03980acf93848b5de46c5c 00:18:48.881 14:23:30 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:18:48.881 14:23:30 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.qkX 00:18:48.881 14:23:30 -- host/auth.sh@59 -- # format_dhchap_key 01169562eb03980acf93848b5de46c5c 1 00:18:48.881 14:23:30 -- nvmf/common.sh@708 -- # format_key DHHC-1 01169562eb03980acf93848b5de46c5c 1 00:18:48.881 14:23:30 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:48.881 14:23:30 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:18:48.881 14:23:30 -- nvmf/common.sh@693 -- # key=01169562eb03980acf93848b5de46c5c 00:18:48.881 14:23:30 -- nvmf/common.sh@693 -- # digest=1 00:18:48.881 14:23:30 -- nvmf/common.sh@694 -- # python - 00:18:48.881 14:23:30 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.qkX 00:18:48.881 14:23:30 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.qkX 00:18:48.881 14:23:30 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.qkX 00:18:48.881 14:23:30 -- host/auth.sh@84 -- # gen_key sha384 48 00:18:48.881 14:23:30 -- host/auth.sh@53 -- # local digest len file key 00:18:48.881 14:23:30 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:48.881 14:23:30 -- host/auth.sh@54 -- # local -A digests 00:18:48.881 14:23:30 -- host/auth.sh@56 -- # digest=sha384 00:18:48.881 14:23:30 -- host/auth.sh@56 -- # len=48 00:18:48.881 14:23:30 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:48.881 14:23:30 -- host/auth.sh@57 -- # key=a61694055165f2c58c0dbc7d2492d0467120fe18b5a158e4 00:18:48.881 14:23:30 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:18:48.881 14:23:30 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.6HJ 00:18:48.881 14:23:30 -- host/auth.sh@59 -- # format_dhchap_key a61694055165f2c58c0dbc7d2492d0467120fe18b5a158e4 2 00:18:48.881 14:23:30 -- nvmf/common.sh@708 -- # format_key DHHC-1 a61694055165f2c58c0dbc7d2492d0467120fe18b5a158e4 2 00:18:48.881 14:23:30 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:48.881 14:23:30 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:18:48.881 14:23:30 -- nvmf/common.sh@693 -- # key=a61694055165f2c58c0dbc7d2492d0467120fe18b5a158e4 00:18:48.881 14:23:30 -- nvmf/common.sh@693 -- # digest=2 00:18:48.881 14:23:30 -- nvmf/common.sh@694 -- # python - 00:18:49.140 14:23:30 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.6HJ 00:18:49.140 14:23:30 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.6HJ 00:18:49.140 14:23:30 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.6HJ 00:18:49.140 14:23:30 -- host/auth.sh@85 -- # gen_key sha512 64 00:18:49.140 14:23:30 -- host/auth.sh@53 -- # local digest len file key 00:18:49.140 14:23:30 -- 
host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:49.140 14:23:30 -- host/auth.sh@54 -- # local -A digests 00:18:49.140 14:23:30 -- host/auth.sh@56 -- # digest=sha512 00:18:49.140 14:23:30 -- host/auth.sh@56 -- # len=64 00:18:49.140 14:23:30 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:49.140 14:23:30 -- host/auth.sh@57 -- # key=f21651221c1ec60205608ac7fb92c3f2ad4333b3fd8d3792ed3ca1f138e8d868 00:18:49.140 14:23:30 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:18:49.140 14:23:30 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.2i5 00:18:49.140 14:23:30 -- host/auth.sh@59 -- # format_dhchap_key f21651221c1ec60205608ac7fb92c3f2ad4333b3fd8d3792ed3ca1f138e8d868 3 00:18:49.140 14:23:30 -- nvmf/common.sh@708 -- # format_key DHHC-1 f21651221c1ec60205608ac7fb92c3f2ad4333b3fd8d3792ed3ca1f138e8d868 3 00:18:49.140 14:23:30 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:49.140 14:23:30 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:18:49.140 14:23:30 -- nvmf/common.sh@693 -- # key=f21651221c1ec60205608ac7fb92c3f2ad4333b3fd8d3792ed3ca1f138e8d868 00:18:49.140 14:23:30 -- nvmf/common.sh@693 -- # digest=3 00:18:49.140 14:23:30 -- nvmf/common.sh@694 -- # python - 00:18:49.140 14:23:30 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.2i5 00:18:49.140 14:23:30 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.2i5 00:18:49.140 14:23:30 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.2i5 00:18:49.140 14:23:30 -- host/auth.sh@87 -- # waitforlisten 3197652 00:18:49.140 14:23:30 -- common/autotest_common.sh@817 -- # '[' -z 3197652 ']' 00:18:49.140 14:23:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.140 14:23:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:49.140 14:23:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
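[annotation] The five DHCHAP secrets generated above all follow one recipe: gen_key pulls len/2 random bytes from /dev/urandom as a hex string, and format_dhchap_key wraps that string in the TP-8006 DHHC-1 representation via the traced python step. A stand-alone sketch of that recipe, assuming the python snippet base64-encodes the ASCII key plus a little-endian CRC-32 tail, which is consistent with the printed values (base64 of "dbc87e65..." is exactly the "ZGJjODdl..." seen in the keys):

gen_key() {
    local digest=$1 len=$2 key file
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # $len hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 - "$key" "${digests[$digest]}" > "$file" << 'PY'
import base64, sys, zlib
key, hash_id = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte integrity tail
print(f"DHHC-1:{hash_id:02x}:{base64.b64encode(key + crc).decode()}:")
PY
    chmod 0600 "$file"    # secrets must not be group/world readable
    echo "$file"          # caller stores the path, e.g. keys[0]=...
}

keys[0]=$(gen_key null 32)      # -> DHHC-1:00:... (hash id 0, 32 hex chars)
keys[4]=$(gen_key sha512 64)    # -> DHHC-1:03:... (hash id 3, 64 hex chars)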
00:18:49.140 14:23:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:49.140 14:23:30 -- common/autotest_common.sh@10 -- # set +x 00:18:49.399 14:23:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:49.399 14:23:30 -- common/autotest_common.sh@850 -- # return 0 00:18:49.399 14:23:30 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:18:49.399 14:23:30 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rdu 00:18:49.399 14:23:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.399 14:23:30 -- common/autotest_common.sh@10 -- # set +x 00:18:49.399 14:23:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.399 14:23:30 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:18:49.399 14:23:30 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ZrV 00:18:49.399 14:23:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.399 14:23:30 -- common/autotest_common.sh@10 -- # set +x 00:18:49.399 14:23:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.399 14:23:30 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:18:49.399 14:23:30 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.qkX 00:18:49.399 14:23:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.399 14:23:30 -- common/autotest_common.sh@10 -- # set +x 00:18:49.399 14:23:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.399 14:23:30 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:18:49.399 14:23:30 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.6HJ 00:18:49.399 14:23:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.399 14:23:30 -- common/autotest_common.sh@10 -- # set +x 00:18:49.399 14:23:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.399 14:23:30 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:18:49.399 14:23:30 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.2i5 00:18:49.399 14:23:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.399 14:23:30 -- common/autotest_common.sh@10 -- # set +x 00:18:49.399 14:23:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.399 14:23:30 -- host/auth.sh@92 -- # nvmet_auth_init 00:18:49.399 14:23:30 -- host/auth.sh@35 -- # get_main_ns_ip 00:18:49.399 14:23:30 -- nvmf/common.sh@717 -- # local ip 00:18:49.399 14:23:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:49.399 14:23:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:49.399 14:23:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.399 14:23:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.399 14:23:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:49.399 14:23:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:49.399 14:23:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:49.399 14:23:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:49.399 14:23:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:49.399 14:23:30 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:18:49.399 14:23:30 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:18:49.399 14:23:30 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:18:49.399 14:23:30 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:49.399 14:23:30 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:49.399 14:23:30 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:49.399 14:23:30 -- nvmf/common.sh@628 -- # local block nvme 00:18:49.399 14:23:30 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:18:49.399 14:23:30 -- nvmf/common.sh@631 -- # modprobe nvmet 00:18:49.399 14:23:30 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:49.399 14:23:30 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:18:50.335 Waiting for block devices as requested 00:18:50.335 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:18:50.335 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:18:50.335 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:18:50.593 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:18:50.593 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:18:50.593 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:18:50.850 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:18:50.850 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:18:50.850 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:18:50.850 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:18:51.108 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:18:51.108 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:18:51.108 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:18:51.108 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:18:51.366 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:18:51.366 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:18:51.367 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:18:51.932 14:23:33 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:18:51.932 14:23:33 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:51.932 14:23:33 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:18:51.933 14:23:33 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:18:51.933 14:23:33 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:51.933 14:23:33 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:18:51.933 14:23:33 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:18:51.933 14:23:33 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:51.933 14:23:33 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:18:51.933 No valid GPT data, bailing 00:18:51.933 14:23:33 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:51.933 14:23:33 -- scripts/common.sh@391 -- # pt= 00:18:51.933 14:23:33 -- scripts/common.sh@392 -- # return 1 00:18:51.933 14:23:33 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:18:51.933 14:23:33 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:18:51.933 14:23:33 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:51.933 14:23:33 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:51.933 14:23:33 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:51.933 14:23:33 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:18:51.933 14:23:33 -- nvmf/common.sh@656 -- # echo 1 00:18:51.933 14:23:33 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:18:51.933 14:23:33 -- nvmf/common.sh@658 -- # echo 1 00:18:51.933 14:23:33 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:18:51.933 14:23:33 -- nvmf/common.sh@661 -- # echo tcp 00:18:51.933 14:23:33 -- 
nvmf/common.sh@662 -- # echo 4420 00:18:51.933 14:23:33 -- nvmf/common.sh@663 -- # echo ipv4 00:18:51.933 14:23:33 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:51.933 14:23:33 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:18:51.933 00:18:51.933 Discovery Log Number of Records 2, Generation counter 2 00:18:51.933 =====Discovery Log Entry 0====== 00:18:51.933 trtype: tcp 00:18:51.933 adrfam: ipv4 00:18:51.933 subtype: current discovery subsystem 00:18:51.933 treq: not specified, sq flow control disable supported 00:18:51.933 portid: 1 00:18:51.933 trsvcid: 4420 00:18:51.933 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:51.933 traddr: 10.0.0.1 00:18:51.933 eflags: none 00:18:51.933 sectype: none 00:18:51.933 =====Discovery Log Entry 1====== 00:18:51.933 trtype: tcp 00:18:51.933 adrfam: ipv4 00:18:51.933 subtype: nvme subsystem 00:18:51.933 treq: not specified, sq flow control disable supported 00:18:51.933 portid: 1 00:18:51.933 trsvcid: 4420 00:18:51.933 subnqn: nqn.2024-02.io.spdk:cnode0 00:18:51.933 traddr: 10.0.0.1 00:18:51.933 eflags: none 00:18:51.933 sectype: none 00:18:51.933 14:23:33 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:51.933 14:23:33 -- host/auth.sh@37 -- # echo 0 00:18:51.933 14:23:33 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:51.933 14:23:33 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:51.933 14:23:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:51.933 14:23:33 -- host/auth.sh@44 -- # digest=sha256 00:18:51.933 14:23:33 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:51.933 14:23:33 -- host/auth.sh@44 -- # keyid=1 00:18:51.933 14:23:33 -- host/auth.sh@45 -- # key=DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:18:51.933 14:23:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:51.933 14:23:33 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:51.933 14:23:33 -- host/auth.sh@49 -- # echo DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:18:51.933 14:23:33 -- host/auth.sh@100 -- # IFS=, 00:18:51.933 14:23:33 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:18:51.933 14:23:33 -- host/auth.sh@100 -- # IFS=, 00:18:51.933 14:23:33 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:51.933 14:23:33 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:18:51.933 14:23:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:51.933 14:23:33 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:18:51.933 14:23:33 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:51.933 14:23:33 -- host/auth.sh@68 -- # keyid=1 00:18:51.933 14:23:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:51.933 14:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.933 14:23:33 -- common/autotest_common.sh@10 -- # set +x 00:18:51.933 14:23:33 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.933 14:23:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:51.933 14:23:33 -- nvmf/common.sh@717 -- # local ip 00:18:51.933 14:23:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:51.933 14:23:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:51.933 14:23:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.933 14:23:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.933 14:23:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:51.933 14:23:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.933 14:23:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:51.933 14:23:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:51.933 14:23:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:51.933 14:23:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:18:51.933 14:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.933 14:23:33 -- common/autotest_common.sh@10 -- # set +x 00:18:51.933 nvme0n1 00:18:51.933 14:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.933 14:23:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.933 14:23:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:51.933 14:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.933 14:23:33 -- common/autotest_common.sh@10 -- # set +x 00:18:51.933 14:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.191 14:23:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.191 14:23:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.191 14:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.191 14:23:33 -- common/autotest_common.sh@10 -- # set +x 00:18:52.191 14:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.191 14:23:33 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:18:52.191 14:23:33 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.191 14:23:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:52.191 14:23:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:18:52.191 14:23:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:52.191 14:23:33 -- host/auth.sh@44 -- # digest=sha256 00:18:52.191 14:23:33 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:52.191 14:23:33 -- host/auth.sh@44 -- # keyid=0 00:18:52.191 14:23:33 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6: 00:18:52.191 14:23:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:52.191 14:23:33 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:52.191 14:23:33 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6: 00:18:52.191 14:23:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:18:52.191 14:23:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:52.191 14:23:33 -- host/auth.sh@68 -- # digest=sha256 00:18:52.191 14:23:33 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:18:52.191 14:23:33 -- host/auth.sh@68 -- # keyid=0 00:18:52.191 14:23:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:52.191 14:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.191 14:23:33 -- common/autotest_common.sh@10 -- # set +x 00:18:52.191 14:23:33 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.191 14:23:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:52.191 14:23:33 -- nvmf/common.sh@717 -- # local ip 00:18:52.191 14:23:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:52.191 14:23:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:52.191 14:23:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.191 14:23:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.191 14:23:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:52.191 14:23:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.191 14:23:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:52.191 14:23:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:52.191 14:23:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:52.191 14:23:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:18:52.191 14:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.191 14:23:33 -- common/autotest_common.sh@10 -- # set +x 00:18:52.191 nvme0n1 00:18:52.191 14:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.191 14:23:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.191 14:23:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:52.191 14:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.191 14:23:33 -- common/autotest_common.sh@10 -- # set +x 00:18:52.191 14:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.191 14:23:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.191 14:23:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.191 14:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.191 14:23:33 -- common/autotest_common.sh@10 -- # set +x 00:18:52.191 14:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.191 14:23:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:52.191 14:23:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:52.191 14:23:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:52.191 14:23:33 -- host/auth.sh@44 -- # digest=sha256 00:18:52.191 14:23:33 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:52.191 14:23:33 -- host/auth.sh@44 -- # keyid=1 00:18:52.191 14:23:33 -- host/auth.sh@45 -- # key=DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:18:52.191 14:23:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:52.191 14:23:33 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:52.191 14:23:33 -- host/auth.sh@49 -- # echo DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:18:52.191 14:23:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:18:52.191 14:23:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:52.191 14:23:33 -- host/auth.sh@68 -- # digest=sha256 00:18:52.191 14:23:33 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:18:52.191 14:23:33 -- host/auth.sh@68 -- # keyid=1 00:18:52.191 14:23:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:52.191 14:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.191 14:23:33 -- common/autotest_common.sh@10 -- # set +x 00:18:52.191 14:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.191 14:23:33 -- host/auth.sh@70 -- # get_main_ns_ip 
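[annotation] The get_main_ns_ip helper whose trace follows resolves the address the initiator should dial: it maps the transport to an environment variable name and indirectly expands it. A reconstruction from those lines (the TEST_TRANSPORT variable name is an assumption; the trace only shows its value, tcp):

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    # Bail out if the transport is unknown or has no candidate variable.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1   # indirect expansion: $NVMF_INITIATOR_IP here
    echo "${!ip}"                 # -> 10.0.0.1 in this run
}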
00:18:52.191 14:23:33 -- nvmf/common.sh@717 -- # local ip 00:18:52.192 14:23:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:52.192 14:23:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:52.192 14:23:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.192 14:23:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.192 14:23:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:52.192 14:23:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.192 14:23:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:52.192 14:23:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:52.192 14:23:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:52.192 14:23:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:18:52.192 14:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.192 14:23:33 -- common/autotest_common.sh@10 -- # set +x 00:18:52.450 nvme0n1 00:18:52.450 14:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.450 14:23:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.450 14:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.450 14:23:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:52.450 14:23:33 -- common/autotest_common.sh@10 -- # set +x 00:18:52.450 14:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.450 14:23:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.450 14:23:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.450 14:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.450 14:23:33 -- common/autotest_common.sh@10 -- # set +x 00:18:52.450 14:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.450 14:23:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:52.450 14:23:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:52.450 14:23:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:52.450 14:23:33 -- host/auth.sh@44 -- # digest=sha256 00:18:52.450 14:23:33 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:52.450 14:23:33 -- host/auth.sh@44 -- # keyid=2 00:18:52.450 14:23:33 -- host/auth.sh@45 -- # key=DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl: 00:18:52.450 14:23:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:52.450 14:23:33 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:52.450 14:23:33 -- host/auth.sh@49 -- # echo DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl: 00:18:52.450 14:23:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:18:52.450 14:23:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:52.450 14:23:33 -- host/auth.sh@68 -- # digest=sha256 00:18:52.450 14:23:33 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:18:52.450 14:23:33 -- host/auth.sh@68 -- # keyid=2 00:18:52.450 14:23:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:52.450 14:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.450 14:23:33 -- common/autotest_common.sh@10 -- # set +x 00:18:52.450 14:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.450 14:23:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:52.450 14:23:33 -- nvmf/common.sh@717 -- # local ip 00:18:52.450 14:23:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:52.450 14:23:33 -- nvmf/common.sh@718 
-- # local -A ip_candidates 00:18:52.450 14:23:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.450 14:23:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.450 14:23:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:52.450 14:23:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.450 14:23:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:52.450 14:23:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:52.450 14:23:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:52.450 14:23:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:52.450 14:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.450 14:23:33 -- common/autotest_common.sh@10 -- # set +x 00:18:52.709 nvme0n1 00:18:52.709 14:23:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.709 14:23:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.709 14:23:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:52.709 14:23:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.709 14:23:34 -- common/autotest_common.sh@10 -- # set +x 00:18:52.709 14:23:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.709 14:23:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.709 14:23:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.709 14:23:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.709 14:23:34 -- common/autotest_common.sh@10 -- # set +x 00:18:52.709 14:23:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.709 14:23:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:52.709 14:23:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:18:52.709 14:23:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:52.709 14:23:34 -- host/auth.sh@44 -- # digest=sha256 00:18:52.709 14:23:34 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:52.709 14:23:34 -- host/auth.sh@44 -- # keyid=3 00:18:52.709 14:23:34 -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==: 00:18:52.709 14:23:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:52.709 14:23:34 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:52.709 14:23:34 -- host/auth.sh@49 -- # echo DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==: 00:18:52.709 14:23:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:18:52.709 14:23:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:52.709 14:23:34 -- host/auth.sh@68 -- # digest=sha256 00:18:52.709 14:23:34 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:18:52.709 14:23:34 -- host/auth.sh@68 -- # keyid=3 00:18:52.709 14:23:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:52.709 14:23:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.709 14:23:34 -- common/autotest_common.sh@10 -- # set +x 00:18:52.709 14:23:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.709 14:23:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:52.709 14:23:34 -- nvmf/common.sh@717 -- # local ip 00:18:52.709 14:23:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:52.709 14:23:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:52.709 14:23:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
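[annotation] Stepping back from the per-round trace: the kernel target that every one of these rounds dials was assembled earlier in this section entirely through configfs. xtrace does not show redirections, so the attribute paths below are an assumption based on the standard nvmet configfs layout, with the values taken from the mkdir/echo/ln -s sequence traced above:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # model string
echo 1            > "$subsys/attr_allow_any_host"             # reverted below
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"        # backing namespace
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"   # expose subsystem on the port
# Host allow-listing, from the nvmet_auth_init trace:
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"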
00:18:52.709 14:23:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.709 14:23:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:52.709 14:23:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.709 14:23:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:52.709 14:23:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:52.709 14:23:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:52.709 14:23:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:18:52.709 14:23:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.709 14:23:34 -- common/autotest_common.sh@10 -- # set +x 00:18:52.709 nvme0n1 00:18:52.709 14:23:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.709 14:23:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.709 14:23:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:52.709 14:23:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.709 14:23:34 -- common/autotest_common.sh@10 -- # set +x 00:18:52.709 14:23:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.709 14:23:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.709 14:23:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.709 14:23:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.709 14:23:34 -- common/autotest_common.sh@10 -- # set +x 00:18:52.968 14:23:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.968 14:23:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:52.968 14:23:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:18:52.968 14:23:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:52.968 14:23:34 -- host/auth.sh@44 -- # digest=sha256 00:18:52.968 14:23:34 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:52.968 14:23:34 -- host/auth.sh@44 -- # keyid=4 00:18:52.968 14:23:34 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=: 00:18:52.968 14:23:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:52.968 14:23:34 -- host/auth.sh@48 -- # echo ffdhe2048 00:18:52.968 14:23:34 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=: 00:18:52.968 14:23:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:18:52.968 14:23:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:52.968 14:23:34 -- host/auth.sh@68 -- # digest=sha256 00:18:52.968 14:23:34 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:18:52.968 14:23:34 -- host/auth.sh@68 -- # keyid=4 00:18:52.968 14:23:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:52.968 14:23:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.968 14:23:34 -- common/autotest_common.sh@10 -- # set +x 00:18:52.968 14:23:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.968 14:23:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:52.968 14:23:34 -- nvmf/common.sh@717 -- # local ip 00:18:52.968 14:23:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:52.968 14:23:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:52.968 14:23:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.968 14:23:34 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.968 14:23:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:52.968 14:23:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.968 14:23:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:52.968 14:23:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:52.968 14:23:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:52.968 14:23:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:52.968 14:23:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.968 14:23:34 -- common/autotest_common.sh@10 -- # set +x 00:18:52.968 nvme0n1 00:18:52.968 14:23:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.968 14:23:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.968 14:23:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.968 14:23:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:52.968 14:23:34 -- common/autotest_common.sh@10 -- # set +x 00:18:52.968 14:23:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.968 14:23:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.968 14:23:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.968 14:23:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.968 14:23:34 -- common/autotest_common.sh@10 -- # set +x 00:18:52.968 14:23:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.968 14:23:34 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.968 14:23:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:52.968 14:23:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:18:52.968 14:23:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:52.968 14:23:34 -- host/auth.sh@44 -- # digest=sha256 00:18:52.968 14:23:34 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:52.968 14:23:34 -- host/auth.sh@44 -- # keyid=0 00:18:52.968 14:23:34 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6: 00:18:52.968 14:23:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:52.968 14:23:34 -- host/auth.sh@48 -- # echo ffdhe3072 00:18:52.968 14:23:34 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6: 00:18:52.968 14:23:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:18:52.968 14:23:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:52.968 14:23:34 -- host/auth.sh@68 -- # digest=sha256 00:18:52.968 14:23:34 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:18:52.968 14:23:34 -- host/auth.sh@68 -- # keyid=0 00:18:52.968 14:23:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:52.968 14:23:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.968 14:23:34 -- common/autotest_common.sh@10 -- # set +x 00:18:52.968 14:23:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.968 14:23:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:52.968 14:23:34 -- nvmf/common.sh@717 -- # local ip 00:18:52.968 14:23:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:52.968 14:23:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:52.968 14:23:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.968 14:23:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.968 14:23:34 -- nvmf/common.sh@723 -- # 
[[ -z tcp ]] 00:18:52.968 14:23:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.968 14:23:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:52.968 14:23:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:52.968 14:23:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:52.968 14:23:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:18:52.968 14:23:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.968 14:23:34 -- common/autotest_common.sh@10 -- # set +x 00:18:53.227 nvme0n1 00:18:53.227 14:23:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.227 14:23:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.227 14:23:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.227 14:23:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:53.227 14:23:34 -- common/autotest_common.sh@10 -- # set +x 00:18:53.227 14:23:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.227 14:23:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.227 14:23:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.227 14:23:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.227 14:23:34 -- common/autotest_common.sh@10 -- # set +x 00:18:53.227 14:23:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.227 14:23:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:53.227 14:23:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:18:53.227 14:23:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:53.227 14:23:34 -- host/auth.sh@44 -- # digest=sha256 00:18:53.227 14:23:34 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:53.227 14:23:34 -- host/auth.sh@44 -- # keyid=1 00:18:53.227 14:23:34 -- host/auth.sh@45 -- # key=DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:18:53.227 14:23:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:53.227 14:23:34 -- host/auth.sh@48 -- # echo ffdhe3072 00:18:53.227 14:23:34 -- host/auth.sh@49 -- # echo DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:18:53.227 14:23:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:18:53.227 14:23:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:53.227 14:23:34 -- host/auth.sh@68 -- # digest=sha256 00:18:53.227 14:23:34 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:18:53.227 14:23:34 -- host/auth.sh@68 -- # keyid=1 00:18:53.227 14:23:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:53.227 14:23:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.227 14:23:34 -- common/autotest_common.sh@10 -- # set +x 00:18:53.227 14:23:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.227 14:23:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:53.227 14:23:34 -- nvmf/common.sh@717 -- # local ip 00:18:53.227 14:23:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:53.227 14:23:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:53.227 14:23:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.227 14:23:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.227 14:23:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:53.227 14:23:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.227 14:23:34 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:53.227 14:23:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:53.228 14:23:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:53.228 14:23:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:18:53.228 14:23:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.228 14:23:34 -- common/autotest_common.sh@10 -- # set +x 00:18:53.485 nvme0n1 00:18:53.485 14:23:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.485 14:23:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.485 14:23:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:53.485 14:23:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.485 14:23:34 -- common/autotest_common.sh@10 -- # set +x 00:18:53.485 14:23:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.485 14:23:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.485 14:23:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.485 14:23:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.485 14:23:34 -- common/autotest_common.sh@10 -- # set +x 00:18:53.485 14:23:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.485 14:23:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:53.485 14:23:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:18:53.485 14:23:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:53.485 14:23:34 -- host/auth.sh@44 -- # digest=sha256 00:18:53.485 14:23:34 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:53.485 14:23:34 -- host/auth.sh@44 -- # keyid=2 00:18:53.485 14:23:34 -- host/auth.sh@45 -- # key=DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl: 00:18:53.485 14:23:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:53.485 14:23:34 -- host/auth.sh@48 -- # echo ffdhe3072 00:18:53.485 14:23:34 -- host/auth.sh@49 -- # echo DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl: 00:18:53.485 14:23:34 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:18:53.485 14:23:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:53.485 14:23:34 -- host/auth.sh@68 -- # digest=sha256 00:18:53.485 14:23:34 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:18:53.485 14:23:34 -- host/auth.sh@68 -- # keyid=2 00:18:53.485 14:23:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:53.485 14:23:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.485 14:23:34 -- common/autotest_common.sh@10 -- # set +x 00:18:53.485 14:23:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.485 14:23:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:53.485 14:23:34 -- nvmf/common.sh@717 -- # local ip 00:18:53.485 14:23:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:53.485 14:23:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:53.485 14:23:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.485 14:23:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.485 14:23:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:53.485 14:23:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.485 14:23:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:53.485 14:23:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:53.485 14:23:34 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:18:53.485 14:23:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:53.485 14:23:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.485 14:23:34 -- common/autotest_common.sh@10 -- # set +x 00:18:53.744 nvme0n1 00:18:53.744 14:23:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.744 14:23:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.744 14:23:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:53.744 14:23:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.744 14:23:35 -- common/autotest_common.sh@10 -- # set +x 00:18:53.744 14:23:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.744 14:23:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.744 14:23:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.744 14:23:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.744 14:23:35 -- common/autotest_common.sh@10 -- # set +x 00:18:53.744 14:23:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.744 14:23:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:53.744 14:23:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:18:53.744 14:23:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:53.744 14:23:35 -- host/auth.sh@44 -- # digest=sha256 00:18:53.744 14:23:35 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:53.744 14:23:35 -- host/auth.sh@44 -- # keyid=3 00:18:53.744 14:23:35 -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==: 00:18:53.744 14:23:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:53.744 14:23:35 -- host/auth.sh@48 -- # echo ffdhe3072 00:18:53.744 14:23:35 -- host/auth.sh@49 -- # echo DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==: 00:18:53.744 14:23:35 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:18:53.744 14:23:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:53.744 14:23:35 -- host/auth.sh@68 -- # digest=sha256 00:18:53.744 14:23:35 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:18:53.744 14:23:35 -- host/auth.sh@68 -- # keyid=3 00:18:53.744 14:23:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:53.744 14:23:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.744 14:23:35 -- common/autotest_common.sh@10 -- # set +x 00:18:53.744 14:23:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:53.744 14:23:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:53.744 14:23:35 -- nvmf/common.sh@717 -- # local ip 00:18:53.744 14:23:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:53.744 14:23:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:53.744 14:23:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.744 14:23:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.744 14:23:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:53.744 14:23:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.744 14:23:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:53.744 14:23:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:53.744 14:23:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:53.744 14:23:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:18:53.744 14:23:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:53.744 14:23:35 -- common/autotest_common.sh@10 -- # set +x 00:18:54.002 nvme0n1 00:18:54.002 14:23:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.002 14:23:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.002 14:23:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:54.002 14:23:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.002 14:23:35 -- common/autotest_common.sh@10 -- # set +x 00:18:54.002 14:23:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.002 14:23:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.002 14:23:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.002 14:23:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.002 14:23:35 -- common/autotest_common.sh@10 -- # set +x 00:18:54.002 14:23:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.002 14:23:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:54.002 14:23:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:18:54.002 14:23:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:54.002 14:23:35 -- host/auth.sh@44 -- # digest=sha256 00:18:54.002 14:23:35 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:54.002 14:23:35 -- host/auth.sh@44 -- # keyid=4 00:18:54.002 14:23:35 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=: 00:18:54.002 14:23:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:54.002 14:23:35 -- host/auth.sh@48 -- # echo ffdhe3072 00:18:54.002 14:23:35 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=: 00:18:54.002 14:23:35 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:18:54.002 14:23:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:54.002 14:23:35 -- host/auth.sh@68 -- # digest=sha256 00:18:54.002 14:23:35 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:18:54.002 14:23:35 -- host/auth.sh@68 -- # keyid=4 00:18:54.002 14:23:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:54.002 14:23:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.002 14:23:35 -- common/autotest_common.sh@10 -- # set +x 00:18:54.002 14:23:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.002 14:23:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:54.002 14:23:35 -- nvmf/common.sh@717 -- # local ip 00:18:54.002 14:23:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:54.002 14:23:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:54.002 14:23:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.002 14:23:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.002 14:23:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:54.002 14:23:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.002 14:23:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:54.002 14:23:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:54.002 14:23:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:54.002 14:23:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key4 00:18:54.002 14:23:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.002 14:23:35 -- common/autotest_common.sh@10 -- # set +x 00:18:54.002 nvme0n1 00:18:54.003 14:23:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.003 14:23:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.003 14:23:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:54.003 14:23:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.003 14:23:35 -- common/autotest_common.sh@10 -- # set +x 00:18:54.261 14:23:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.261 14:23:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.261 14:23:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.261 14:23:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.261 14:23:35 -- common/autotest_common.sh@10 -- # set +x 00:18:54.261 14:23:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.261 14:23:35 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:18:54.261 14:23:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:54.261 14:23:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:18:54.261 14:23:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:54.261 14:23:35 -- host/auth.sh@44 -- # digest=sha256 00:18:54.261 14:23:35 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:54.261 14:23:35 -- host/auth.sh@44 -- # keyid=0 00:18:54.261 14:23:35 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6: 00:18:54.261 14:23:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:54.261 14:23:35 -- host/auth.sh@48 -- # echo ffdhe4096 00:18:54.261 14:23:35 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6: 00:18:54.261 14:23:35 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:18:54.261 14:23:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:54.261 14:23:35 -- host/auth.sh@68 -- # digest=sha256 00:18:54.261 14:23:35 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:18:54.261 14:23:35 -- host/auth.sh@68 -- # keyid=0 00:18:54.261 14:23:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:54.261 14:23:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.261 14:23:35 -- common/autotest_common.sh@10 -- # set +x 00:18:54.261 14:23:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.261 14:23:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:54.261 14:23:35 -- nvmf/common.sh@717 -- # local ip 00:18:54.261 14:23:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:54.261 14:23:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:54.261 14:23:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.261 14:23:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.261 14:23:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:54.261 14:23:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.261 14:23:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:54.261 14:23:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:54.261 14:23:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:54.261 14:23:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:18:54.261 14:23:35 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:18:54.261 14:23:35 -- common/autotest_common.sh@10 -- # set +x 00:18:54.521 nvme0n1 00:18:54.521 14:23:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.521 14:23:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.521 14:23:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:54.521 14:23:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.521 14:23:35 -- common/autotest_common.sh@10 -- # set +x 00:18:54.521 14:23:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.521 14:23:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.521 14:23:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.521 14:23:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.521 14:23:35 -- common/autotest_common.sh@10 -- # set +x 00:18:54.521 14:23:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.521 14:23:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:54.521 14:23:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:18:54.521 14:23:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:54.521 14:23:35 -- host/auth.sh@44 -- # digest=sha256 00:18:54.521 14:23:35 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:54.521 14:23:35 -- host/auth.sh@44 -- # keyid=1 00:18:54.521 14:23:35 -- host/auth.sh@45 -- # key=DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:18:54.521 14:23:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:54.521 14:23:35 -- host/auth.sh@48 -- # echo ffdhe4096 00:18:54.521 14:23:35 -- host/auth.sh@49 -- # echo DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:18:54.521 14:23:35 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:18:54.521 14:23:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:54.521 14:23:35 -- host/auth.sh@68 -- # digest=sha256 00:18:54.521 14:23:35 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:18:54.521 14:23:35 -- host/auth.sh@68 -- # keyid=1 00:18:54.521 14:23:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:54.521 14:23:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.521 14:23:35 -- common/autotest_common.sh@10 -- # set +x 00:18:54.521 14:23:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.521 14:23:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:54.521 14:23:35 -- nvmf/common.sh@717 -- # local ip 00:18:54.521 14:23:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:54.521 14:23:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:54.521 14:23:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.521 14:23:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.521 14:23:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:54.521 14:23:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.521 14:23:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:54.521 14:23:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:54.521 14:23:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:54.521 14:23:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:18:54.521 14:23:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.521 14:23:35 -- common/autotest_common.sh@10 -- # set +x 00:18:54.780 nvme0n1 00:18:54.780 
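[annotation] Each authentication round above and below has the same four-step shape. A condensed sketch of connect_authenticate as it reads from the trace, where rpc_cmd is assumed to be the usual wrapper around scripts/rpc.py for the target under test:

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Pin the initiator to exactly one digest/DH-group combination.
    rpc_cmd bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Attach to the kernel target, authenticating with the keyring entry.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid"
    # A successful DH-HMAC-CHAP handshake leaves controller nvme0 behind;
    # anything else fails the [[ ... == \n\v\m\e\0 ]] check in the trace.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}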
14:23:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.780 14:23:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.780 14:23:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:54.780 14:23:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.780 14:23:36 -- common/autotest_common.sh@10 -- # set +x 00:18:54.780 14:23:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.780 14:23:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.780 14:23:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.780 14:23:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.780 14:23:36 -- common/autotest_common.sh@10 -- # set +x 00:18:54.780 14:23:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.780 14:23:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:18:54.780 14:23:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:18:54.780 14:23:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:18:54.780 14:23:36 -- host/auth.sh@44 -- # digest=sha256 00:18:54.780 14:23:36 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:54.780 14:23:36 -- host/auth.sh@44 -- # keyid=2 00:18:54.780 14:23:36 -- host/auth.sh@45 -- # key=DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl: 00:18:54.780 14:23:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:18:54.780 14:23:36 -- host/auth.sh@48 -- # echo ffdhe4096 00:18:54.780 14:23:36 -- host/auth.sh@49 -- # echo DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl: 00:18:54.780 14:23:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:18:54.780 14:23:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:18:54.780 14:23:36 -- host/auth.sh@68 -- # digest=sha256 00:18:54.780 14:23:36 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:18:54.780 14:23:36 -- host/auth.sh@68 -- # keyid=2 00:18:54.780 14:23:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:54.780 14:23:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.780 14:23:36 -- common/autotest_common.sh@10 -- # set +x 00:18:54.780 14:23:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.780 14:23:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:18:54.780 14:23:36 -- nvmf/common.sh@717 -- # local ip 00:18:54.780 14:23:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:18:54.780 14:23:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:18:54.780 14:23:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.780 14:23:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.780 14:23:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:18:54.780 14:23:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.780 14:23:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:18:54.780 14:23:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:18:54.780 14:23:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:18:55.038 14:23:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:55.038 14:23:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.038 14:23:36 -- common/autotest_common.sh@10 -- # set +x 00:18:55.297 nvme0n1 00:18:55.297 14:23:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.297 14:23:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:18:55.297 14:23:36 -- host/auth.sh@73 -- # 
00:18:55.297 nvme0n1
00:18:55.297 14:23:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:55.297 14:23:36 -- host/auth.sh@73 -- # jq -r '.[].name'
00:18:55.297 14:23:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:18:55.297 14:23:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:55.297 14:23:36 -- common/autotest_common.sh@10 -- # set +x
00:18:55.297 14:23:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:55.297 14:23:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:55.297 14:23:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:18:55.297 14:23:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:55.297 14:23:36 -- common/autotest_common.sh@10 -- # set +x
00:18:55.297 14:23:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:55.297 14:23:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:18:55.297 14:23:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:18:55.297 14:23:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:18:55.297 14:23:36 -- host/auth.sh@44 -- # digest=sha256
00:18:55.297 14:23:36 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:18:55.297 14:23:36 -- host/auth.sh@44 -- # keyid=3
00:18:55.297 14:23:36 -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==:
00:18:55.297 14:23:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:18:55.297 14:23:36 -- host/auth.sh@48 -- # echo ffdhe4096
00:18:55.297 14:23:36 -- host/auth.sh@49 -- # echo DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==:
00:18:55.297 14:23:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3
00:18:55.297 14:23:36 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:18:55.297 14:23:36 -- host/auth.sh@68 -- # digest=sha256
00:18:55.297 14:23:36 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:18:55.297 14:23:36 -- host/auth.sh@68 -- # keyid=3
00:18:55.297 14:23:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:55.297 14:23:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:55.297 14:23:36 -- common/autotest_common.sh@10 -- # set +x
00:18:55.297 14:23:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:55.297 14:23:36 -- host/auth.sh@70 -- # get_main_ns_ip
00:18:55.297 14:23:36 -- nvmf/common.sh@717 -- # local ip
00:18:55.297 14:23:36 -- nvmf/common.sh@718 -- # ip_candidates=()
00:18:55.297 14:23:36 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:18:55.297 14:23:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:18:55.297 14:23:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:18:55.297 14:23:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:18:55.297 14:23:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:18:55.297 14:23:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:18:55.297 14:23:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:18:55.297 14:23:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:18:55.297 14:23:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:18:55.297 14:23:36 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:55.297 14:23:36 -- common/autotest_common.sh@10 -- # set +x
00:18:55.556 nvme0n1
00:18:55.556 14:23:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:55.556 14:23:36 -- host/auth.sh@73 -- # jq -r '.[].name'
00:18:55.556 14:23:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:18:55.556 14:23:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:55.556 14:23:37 -- common/autotest_common.sh@10 -- # set +x
00:18:55.556 14:23:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:55.556 14:23:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:55.556 14:23:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:18:55.556 14:23:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:55.556 14:23:37 -- common/autotest_common.sh@10 -- # set +x
00:18:55.556 14:23:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:55.556 14:23:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:18:55.556 14:23:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:18:55.556 14:23:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:18:55.556 14:23:37 -- host/auth.sh@44 -- # digest=sha256
00:18:55.556 14:23:37 -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:18:55.556 14:23:37 -- host/auth.sh@44 -- # keyid=4
00:18:55.556 14:23:37 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=:
00:18:55.556 14:23:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:18:55.556 14:23:37 -- host/auth.sh@48 -- # echo ffdhe4096
00:18:55.556 14:23:37 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=:
00:18:55.556 14:23:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4
00:18:55.556 14:23:37 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:18:55.556 14:23:37 -- host/auth.sh@68 -- # digest=sha256
00:18:55.556 14:23:37 -- host/auth.sh@68 -- # dhgroup=ffdhe4096
00:18:55.556 14:23:37 -- host/auth.sh@68 -- # keyid=4
00:18:55.556 14:23:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:55.556 14:23:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:55.556 14:23:37 -- common/autotest_common.sh@10 -- # set +x
00:18:55.556 14:23:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:55.556 14:23:37 -- host/auth.sh@70 -- # get_main_ns_ip
00:18:55.556 14:23:37 -- nvmf/common.sh@717 -- # local ip
00:18:55.556 14:23:37 -- nvmf/common.sh@718 -- # ip_candidates=()
00:18:55.556 14:23:37 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:18:55.556 14:23:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:18:55.556 14:23:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:18:55.556 14:23:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:18:55.556 14:23:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:18:55.556 14:23:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:18:55.556 14:23:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:18:55.556 14:23:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:18:55.556 14:23:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:18:55.556 14:23:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:55.556 14:23:37 -- common/autotest_common.sh@10 -- # set +x
00:18:55.815 nvme0n1
00:18:55.815 14:23:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:55.815 14:23:37 -- host/auth.sh@73 -- # jq -r '.[].name'
00:18:55.815 14:23:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:18:55.815 14:23:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:55.815 14:23:37 -- common/autotest_common.sh@10 -- # set +x
00:18:55.815 14:23:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:55.815 14:23:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:55.815 14:23:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:18:55.815 14:23:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:55.815 14:23:37 -- common/autotest_common.sh@10 -- # set +x
00:18:55.815 14:23:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
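The host side of each pass is identical apart from the parameters. Pieced together from the xtrace at host/auth.sh@66-74, connect_authenticate runs roughly the sequence below; treat it as a sketch inferred from the trace rather than the script verbatim:

connect_authenticate() {
	local digest dhgroup keyid
	digest="$1" dhgroup="$2" keyid="$3"
	# Restrict the initiator to the digest/dhgroup pair under test.
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
	# The attach only completes if DH-HMAC-CHAP succeeds with the given key.
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
		-n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
	# Confirm the controller came up (the bare nvme0n1 lines in the log are the
	# namespace appearing), then tear it down for the next pass.
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
}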
00:18:55.815 14:23:37 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:18:55.815 14:23:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:18:55.815 14:23:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:18:55.815 14:23:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:18:55.815 14:23:37 -- host/auth.sh@44 -- # digest=sha256
00:18:55.815 14:23:37 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:18:55.815 14:23:37 -- host/auth.sh@44 -- # keyid=0
00:18:55.815 14:23:37 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6:
00:18:55.815 14:23:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:18:55.815 14:23:37 -- host/auth.sh@48 -- # echo ffdhe6144
00:18:55.815 14:23:37 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6:
00:18:55.815 14:23:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0
00:18:55.815 14:23:37 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:18:55.815 14:23:37 -- host/auth.sh@68 -- # digest=sha256
00:18:55.815 14:23:37 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:18:55.815 14:23:37 -- host/auth.sh@68 -- # keyid=0
00:18:55.815 14:23:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:55.815 14:23:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:55.815 14:23:37 -- common/autotest_common.sh@10 -- # set +x
00:18:55.815 14:23:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:56.075 14:23:37 -- host/auth.sh@70 -- # get_main_ns_ip
00:18:56.075 14:23:37 -- nvmf/common.sh@717 -- # local ip
00:18:56.075 14:23:37 -- nvmf/common.sh@718 -- # ip_candidates=()
00:18:56.075 14:23:37 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:18:56.075 14:23:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:18:56.075 14:23:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:18:56.075 14:23:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:18:56.075 14:23:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:18:56.075 14:23:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:18:56.075 14:23:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:18:56.075 14:23:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:18:56.075 14:23:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:18:56.075 14:23:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:56.075 14:23:37 -- common/autotest_common.sh@10 -- # set +x
00:18:56.640 nvme0n1
00:18:56.640 14:23:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:56.640 14:23:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:18:56.640 14:23:37 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:56.640 14:23:37 -- host/auth.sh@73 -- # jq -r '.[].name'
00:18:56.640 14:23:37 -- common/autotest_common.sh@10 -- # set +x
00:18:56.640 14:23:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:56.640 14:23:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:56.640 14:23:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:18:56.640 14:23:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:56.640 14:23:38 -- common/autotest_common.sh@10 -- # set +x
00:18:56.640 14:23:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:56.640 14:23:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:18:56.640 14:23:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:18:56.640 14:23:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:18:56.640 14:23:38 -- host/auth.sh@44 -- # digest=sha256
00:18:56.640 14:23:38 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:18:56.640 14:23:38 -- host/auth.sh@44 -- # keyid=1
00:18:56.640 14:23:38 -- host/auth.sh@45 -- # key=DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==:
00:18:56.640 14:23:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:18:56.640 14:23:38 -- host/auth.sh@48 -- # echo ffdhe6144
00:18:56.640 14:23:38 -- host/auth.sh@49 -- # echo DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==:
00:18:56.640 14:23:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1
00:18:56.640 14:23:38 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:18:56.640 14:23:38 -- host/auth.sh@68 -- # digest=sha256
00:18:56.640 14:23:38 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:18:56.640 14:23:38 -- host/auth.sh@68 -- # keyid=1
00:18:56.640 14:23:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:56.640 14:23:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:56.640 14:23:38 -- common/autotest_common.sh@10 -- # set +x
00:18:56.640 14:23:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:56.640 14:23:38 -- host/auth.sh@70 -- # get_main_ns_ip
00:18:56.640 14:23:38 -- nvmf/common.sh@717 -- # local ip
00:18:56.640 14:23:38 -- nvmf/common.sh@718 -- # ip_candidates=()
00:18:56.640 14:23:38 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:18:56.640 14:23:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:18:56.640 14:23:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:18:56.640 14:23:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:18:56.640 14:23:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:18:56.640 14:23:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:18:56.640 14:23:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:18:56.640 14:23:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:18:56.641 14:23:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:18:56.641 14:23:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:56.641 14:23:38 -- common/autotest_common.sh@10 -- # set +x
00:18:57.206 nvme0n1
00:18:57.206 14:23:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:57.206 14:23:38 -- host/auth.sh@73 -- # jq -r '.[].name'
00:18:57.206 14:23:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:18:57.206 14:23:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:57.206 14:23:38 -- common/autotest_common.sh@10 -- # set +x
00:18:57.206 14:23:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:57.206 14:23:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:57.206 14:23:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:18:57.206 14:23:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:57.206 14:23:38 -- common/autotest_common.sh@10 -- # set +x
00:18:57.206 14:23:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:57.206 14:23:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:18:57.206 14:23:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:18:57.206 14:23:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:18:57.206 14:23:38 -- host/auth.sh@44 -- # digest=sha256
00:18:57.206 14:23:38 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:18:57.206 14:23:38 -- host/auth.sh@44 -- # keyid=2
00:18:57.206 14:23:38 -- host/auth.sh@45 -- # key=DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl:
00:18:57.206 14:23:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:18:57.206 14:23:38 -- host/auth.sh@48 -- # echo ffdhe6144
00:18:57.206 14:23:38 -- host/auth.sh@49 -- # echo DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl:
00:18:57.206 14:23:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2
00:18:57.206 14:23:38 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:18:57.206 14:23:38 -- host/auth.sh@68 -- # digest=sha256
00:18:57.206 14:23:38 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:18:57.206 14:23:38 -- host/auth.sh@68 -- # keyid=2
00:18:57.206 14:23:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:57.206 14:23:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:57.206 14:23:38 -- common/autotest_common.sh@10 -- # set +x
00:18:57.206 14:23:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:57.206 14:23:38 -- host/auth.sh@70 -- # get_main_ns_ip
00:18:57.206 14:23:38 -- nvmf/common.sh@717 -- # local ip
00:18:57.206 14:23:38 -- nvmf/common.sh@718 -- # ip_candidates=()
00:18:57.206 14:23:38 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:18:57.206 14:23:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:18:57.206 14:23:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:18:57.206 14:23:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:18:57.206 14:23:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:18:57.206 14:23:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:18:57.206 14:23:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:18:57.206 14:23:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:18:57.206 14:23:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:18:57.206 14:23:38 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:57.206 14:23:38 -- common/autotest_common.sh@10 -- # set +x
00:18:57.771 nvme0n1
00:18:57.771 14:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:57.771 14:23:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:18:57.771 14:23:39 -- host/auth.sh@73 -- # jq -r '.[].name'
00:18:57.771 14:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:57.771 14:23:39 -- common/autotest_common.sh@10 -- # set +x
00:18:57.771 14:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:57.771 14:23:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:57.771 14:23:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:18:57.771 14:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:57.771 14:23:39 -- common/autotest_common.sh@10 -- # set +x
00:18:57.771 14:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:57.771 14:23:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:18:57.771 14:23:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:18:57.771 14:23:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:18:57.771 14:23:39 -- host/auth.sh@44 -- # digest=sha256
00:18:57.771 14:23:39 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:18:57.771 14:23:39 -- host/auth.sh@44 -- # keyid=3
00:18:57.771 14:23:39 -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==:
00:18:57.771 14:23:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:18:57.771 14:23:39 -- host/auth.sh@48 -- # echo ffdhe6144
00:18:57.771 14:23:39 -- host/auth.sh@49 -- # echo DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==:
00:18:57.771 14:23:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3
00:18:57.771 14:23:39 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:18:57.771 14:23:39 -- host/auth.sh@68 -- # digest=sha256
00:18:57.771 14:23:39 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:18:57.771 14:23:39 -- host/auth.sh@68 -- # keyid=3
00:18:57.771 14:23:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:57.771 14:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:57.771 14:23:39 -- common/autotest_common.sh@10 -- # set +x
00:18:57.771 14:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:57.771 14:23:39 -- host/auth.sh@70 -- # get_main_ns_ip
00:18:57.771 14:23:39 -- nvmf/common.sh@717 -- # local ip
00:18:57.771 14:23:39 -- nvmf/common.sh@718 -- # ip_candidates=()
00:18:57.771 14:23:39 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:18:57.771 14:23:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:18:57.771 14:23:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:18:57.771 14:23:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:18:57.771 14:23:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:18:57.771 14:23:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:18:57.771 14:23:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:18:57.771 14:23:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:18:57.771 14:23:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:18:57.771 14:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:57.771 14:23:39 -- common/autotest_common.sh@10 -- # set +x
00:18:58.704 nvme0n1
00:18:58.704 14:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:58.704 14:23:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:18:58.704 14:23:39 -- host/auth.sh@73 -- # jq -r '.[].name'
00:18:58.704 14:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:58.704 14:23:39 -- common/autotest_common.sh@10 -- # set +x
00:18:58.704 14:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:58.704 14:23:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:58.704 14:23:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:18:58.704 14:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:58.704 14:23:39 -- common/autotest_common.sh@10 -- # set +x
00:18:58.704 14:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:58.704 14:23:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:18:58.704 14:23:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:18:58.704 14:23:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:18:58.704 14:23:39 -- host/auth.sh@44 -- # digest=sha256
00:18:58.704 14:23:39 -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:18:58.704 14:23:39 -- host/auth.sh@44 -- # keyid=4
00:18:58.704 14:23:39 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=:
00:18:58.704 14:23:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:18:58.704 14:23:39 -- host/auth.sh@48 -- # echo ffdhe6144
00:18:58.704 14:23:39 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=:
00:18:58.704 14:23:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4
00:18:58.704 14:23:39 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:18:58.704 14:23:39 -- host/auth.sh@68 -- # digest=sha256
00:18:58.704 14:23:39 -- host/auth.sh@68 -- # dhgroup=ffdhe6144
00:18:58.704 14:23:39 -- host/auth.sh@68 -- # keyid=4
00:18:58.704 14:23:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:58.704 14:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:58.704 14:23:39 -- common/autotest_common.sh@10 -- # set +x
00:18:58.704 14:23:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:58.704 14:23:39 -- host/auth.sh@70 -- # get_main_ns_ip
00:18:58.704 14:23:39 -- nvmf/common.sh@717 -- # local ip
00:18:58.704 14:23:39 -- nvmf/common.sh@718 -- # ip_candidates=()
00:18:58.704 14:23:39 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:18:58.704 14:23:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:18:58.704 14:23:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:18:58.704 14:23:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:18:58.704 14:23:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:18:58.704 14:23:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:18:58.704 14:23:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:18:58.704 14:23:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:18:58.704 14:23:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:18:58.704 14:23:39 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:58.704 14:23:39 -- common/autotest_common.sh@10 -- # set +x
00:18:59.270 nvme0n1
00:18:59.270 14:23:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:59.270 14:23:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:18:59.270 14:23:40 -- host/auth.sh@73 -- # jq -r '.[].name'
00:18:59.270 14:23:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:59.270 14:23:40 -- common/autotest_common.sh@10 -- # set +x
00:18:59.270 14:23:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:59.270 14:23:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:59.270 14:23:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:18:59.270 14:23:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:59.270 14:23:40 -- common/autotest_common.sh@10 -- # set +x
00:18:59.270 14:23:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
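The nvmf/common.sh@717-731 block that precedes every attach is the initiator-address lookup. Reassembled from the repeated trace; the paired [[ -z ]] tests read like halves of compound guards, so the control flow below is inferred:

get_main_ns_ip() {
	local ip
	local -A ip_candidates=()
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP
	# TEST_TRANSPORT is tcp in this run, so NVMF_INITIATOR_IP is chosen.
	[[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}
	[[ -z ${!ip} ]] && return 1    # indirect expansion; ${!ip} is 10.0.0.1 here
	echo "${!ip}"
}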
00:18:59.270 14:23:40 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:18:59.270 14:23:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:18:59.270 14:23:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:18:59.270 14:23:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:18:59.270 14:23:40 -- host/auth.sh@44 -- # digest=sha256
00:18:59.270 14:23:40 -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:18:59.270 14:23:40 -- host/auth.sh@44 -- # keyid=0
00:18:59.270 14:23:40 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6:
00:18:59.270 14:23:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:18:59.270 14:23:40 -- host/auth.sh@48 -- # echo ffdhe8192
00:18:59.270 14:23:40 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6:
00:18:59.270 14:23:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0
00:18:59.270 14:23:40 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:18:59.270 14:23:40 -- host/auth.sh@68 -- # digest=sha256
00:18:59.270 14:23:40 -- host/auth.sh@68 -- # dhgroup=ffdhe8192
00:18:59.270 14:23:40 -- host/auth.sh@68 -- # keyid=0
00:18:59.270 14:23:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:18:59.270 14:23:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:59.270 14:23:40 -- common/autotest_common.sh@10 -- # set +x
00:18:59.270 14:23:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:18:59.270 14:23:40 -- host/auth.sh@70 -- # get_main_ns_ip
00:18:59.270 14:23:40 -- nvmf/common.sh@717 -- # local ip
00:18:59.270 14:23:40 -- nvmf/common.sh@718 -- # ip_candidates=()
00:18:59.270 14:23:40 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:18:59.270 14:23:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:18:59.270 14:23:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:18:59.270 14:23:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:18:59.270 14:23:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:18:59.270 14:23:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:18:59.270 14:23:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:18:59.270 14:23:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:18:59.270 14:23:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:18:59.270 14:23:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:18:59.270 14:23:40 -- common/autotest_common.sh@10 -- # set +x
00:19:00.203 nvme0n1
00:19:00.203 14:23:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:00.203 14:23:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:00.203 14:23:41 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:00.203 14:23:41 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:00.203 14:23:41 -- common/autotest_common.sh@10 -- # set +x
00:19:00.203 14:23:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:00.461 14:23:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:00.461 14:23:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:00.461 14:23:41 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:00.461 14:23:41 -- common/autotest_common.sh@10 -- # set +x
00:19:00.461 14:23:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:00.461 14:23:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:00.461 14:23:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:19:00.461 14:23:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:00.461 14:23:41 -- host/auth.sh@44 -- # digest=sha256
00:19:00.461 14:23:41 -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:19:00.461 14:23:41 -- host/auth.sh@44 -- # keyid=1
00:19:00.461 14:23:41 -- host/auth.sh@45 -- # key=DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==:
00:19:00.461 14:23:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:19:00.461 14:23:41 -- host/auth.sh@48 -- # echo ffdhe8192
00:19:00.461 14:23:41 -- host/auth.sh@49 -- # echo DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==:
00:19:00.461 14:23:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1
00:19:00.461 14:23:41 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:00.461 14:23:41 -- host/auth.sh@68 -- # digest=sha256
00:19:00.461 14:23:41 -- host/auth.sh@68 -- # dhgroup=ffdhe8192
00:19:00.461 14:23:41 -- host/auth.sh@68 -- # keyid=1
00:19:00.461 14:23:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:00.461 14:23:41 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:00.461 14:23:41 -- common/autotest_common.sh@10 -- # set +x
00:19:00.461 14:23:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:00.461 14:23:41 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:00.461 14:23:41 -- nvmf/common.sh@717 -- # local ip
00:19:00.461 14:23:41 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:00.461 14:23:41 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:00.461 14:23:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:00.461 14:23:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:00.461 14:23:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:00.461 14:23:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:00.461 14:23:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:00.461 14:23:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:00.461 14:23:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:00.461 14:23:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:19:00.461 14:23:41 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:00.461 14:23:41 -- common/autotest_common.sh@10 -- # set +x
00:19:01.395 nvme0n1
00:19:01.395 14:23:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:01.395 14:23:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:01.395 14:23:42 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:01.395 14:23:42 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:01.395 14:23:42 -- common/autotest_common.sh@10 -- # set +x
00:19:01.395 14:23:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:01.395 14:23:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:01.395 14:23:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:01.395 14:23:42 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:01.395 14:23:42 -- common/autotest_common.sh@10 -- # set +x
00:19:01.395 14:23:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:01.395 14:23:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:01.395 14:23:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:19:01.395 14:23:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:01.395 14:23:42 -- host/auth.sh@44 -- # digest=sha256
00:19:01.395 14:23:42 -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:19:01.395 14:23:42 -- host/auth.sh@44 -- # keyid=2
00:19:01.395 14:23:42 -- host/auth.sh@45 -- # key=DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl:
00:19:01.395 14:23:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:19:01.395 14:23:42 -- host/auth.sh@48 -- # echo ffdhe8192
00:19:01.395 14:23:42 -- host/auth.sh@49 -- # echo DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl:
00:19:01.395 14:23:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2
00:19:01.395 14:23:42 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:01.395 14:23:42 -- host/auth.sh@68 -- # digest=sha256
00:19:01.395 14:23:42 -- host/auth.sh@68 -- # dhgroup=ffdhe8192
00:19:01.395 14:23:42 -- host/auth.sh@68 -- # keyid=2
00:19:01.395 14:23:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:01.395 14:23:42 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:01.395 14:23:42 -- common/autotest_common.sh@10 -- # set +x
00:19:01.395 14:23:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:01.395 14:23:42 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:01.395 14:23:42 -- nvmf/common.sh@717 -- # local ip
00:19:01.395 14:23:42 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:01.395 14:23:42 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:01.395 14:23:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:01.395 14:23:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:01.395 14:23:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:01.395 14:23:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:01.395 14:23:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:01.395 14:23:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:01.395 14:23:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:01.395 14:23:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:19:01.395 14:23:42 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:01.395 14:23:42 -- common/autotest_common.sh@10 -- # set +x
00:19:02.768 nvme0n1
00:19:02.768 14:23:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:02.768 14:23:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:02.768 14:23:44 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:02.768 14:23:44 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:02.768 14:23:44 -- common/autotest_common.sh@10 -- # set +x
00:19:02.768 14:23:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:02.768 14:23:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:02.768 14:23:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:02.768 14:23:44 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:02.768 14:23:44 -- common/autotest_common.sh@10 -- # set +x
00:19:02.768 14:23:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:02.768 14:23:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:02.768 14:23:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:19:02.768 14:23:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:02.768 14:23:44 -- host/auth.sh@44 -- # digest=sha256
00:19:02.768 14:23:44 -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:19:02.768 14:23:44 -- host/auth.sh@44 -- # keyid=3
00:19:02.769 14:23:44 -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==:
00:19:02.769 14:23:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:19:02.769 14:23:44 -- host/auth.sh@48 -- # echo ffdhe8192
00:19:02.769 14:23:44 -- host/auth.sh@49 -- # echo DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==:
00:19:02.769 14:23:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3
00:19:02.769 14:23:44 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:02.769 14:23:44 -- host/auth.sh@68 -- # digest=sha256
00:19:02.769 14:23:44 -- host/auth.sh@68 -- # dhgroup=ffdhe8192
00:19:02.769 14:23:44 -- host/auth.sh@68 -- # keyid=3
00:19:02.769 14:23:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:02.769 14:23:44 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:02.769 14:23:44 -- common/autotest_common.sh@10 -- # set +x
00:19:02.769 14:23:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:02.769 14:23:44 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:02.769 14:23:44 -- nvmf/common.sh@717 -- # local ip
00:19:02.769 14:23:44 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:02.769 14:23:44 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:02.769 14:23:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:02.769 14:23:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:02.769 14:23:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:02.769 14:23:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:02.769 14:23:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:02.769 14:23:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:02.769 14:23:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:02.769 14:23:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:19:02.769 14:23:44 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:02.769 14:23:44 -- common/autotest_common.sh@10 -- # set +x
00:19:03.703 nvme0n1
00:19:03.703 14:23:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:03.703 14:23:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:03.703 14:23:45 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:03.703 14:23:45 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:03.703 14:23:45 -- common/autotest_common.sh@10 -- # set +x
00:19:03.703 14:23:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:03.703 14:23:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:03.703 14:23:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:03.703 14:23:45 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:03.703 14:23:45 -- common/autotest_common.sh@10 -- # set +x
00:19:03.703 14:23:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:03.703 14:23:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:03.703 14:23:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:19:03.703 14:23:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:03.703 14:23:45 -- host/auth.sh@44 -- # digest=sha256
00:19:03.703 14:23:45 -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:19:03.703 14:23:45 -- host/auth.sh@44 -- # keyid=4
00:19:03.703 14:23:45 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=:
00:19:03.962 14:23:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:19:03.962 14:23:45 -- host/auth.sh@48 -- # echo ffdhe8192
00:19:03.962 14:23:45 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=:
00:19:03.962 14:23:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4
00:19:03.962 14:23:45 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:03.962 14:23:45 -- host/auth.sh@68 -- # digest=sha256
00:19:03.962 14:23:45 -- host/auth.sh@68 -- # dhgroup=ffdhe8192
00:19:03.962 14:23:45 -- host/auth.sh@68 -- # keyid=4
00:19:03.962 14:23:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:03.962 14:23:45 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:03.962 14:23:45 -- common/autotest_common.sh@10 -- # set +x
00:19:03.962 14:23:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:03.962 14:23:45 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:03.962 14:23:45 -- nvmf/common.sh@717 -- # local ip
00:19:03.962 14:23:45 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:03.962 14:23:45 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:03.962 14:23:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:03.962 14:23:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:03.962 14:23:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:03.962 14:23:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:03.962 14:23:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:03.962 14:23:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:03.962 14:23:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:03.962 14:23:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:19:03.962 14:23:45 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:03.962 14:23:45 -- common/autotest_common.sh@10 -- # set +x
00:19:04.897 nvme0n1
00:19:04.897 14:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:04.897 14:23:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:04.897 14:23:46 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:04.897 14:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:04.897 14:23:46 -- common/autotest_common.sh@10 -- # set +x
00:19:04.897 14:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:04.897 14:23:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:04.897 14:23:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:04.897 14:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:04.897 14:23:46 -- common/autotest_common.sh@10 -- # set +x
00:19:04.897 14:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
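Here the digest loop advances from sha256 to sha384. The host/auth.sh@107-109 markers reveal three nested loops driving the whole test matrix; a sketch of the driver follows, with the array contents inferred from the values seen in this log (entries beyond what is visible here are assumptions):

digests=(sha256 sha384 sha512)   # sha512 assumed; only sha256/sha384 appear in this section
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # order inferred from the trace
for digest in "${digests[@]}"; do
	for dhgroup in "${dhgroups[@]}"; do
		for keyid in "${!keys[@]}"; do
			nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target side
			connect_authenticate "$digest" "$dhgroup" "$keyid"   # host side
		done
	done
done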
00:19:04.897 14:23:46 -- host/auth.sh@107 -- # for digest in "${digests[@]}"
00:19:04.897 14:23:46 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:19:04.897 14:23:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:04.897 14:23:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:19:04.897 14:23:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:04.897 14:23:46 -- host/auth.sh@44 -- # digest=sha384
00:19:04.897 14:23:46 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:19:04.897 14:23:46 -- host/auth.sh@44 -- # keyid=0
00:19:04.897 14:23:46 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6:
00:19:04.897 14:23:46 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:19:04.897 14:23:46 -- host/auth.sh@48 -- # echo ffdhe2048
00:19:04.897 14:23:46 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6:
00:19:04.897 14:23:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0
00:19:04.897 14:23:46 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:04.897 14:23:46 -- host/auth.sh@68 -- # digest=sha384
00:19:04.897 14:23:46 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:19:04.897 14:23:46 -- host/auth.sh@68 -- # keyid=0
00:19:04.897 14:23:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:04.897 14:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:04.897 14:23:46 -- common/autotest_common.sh@10 -- # set +x
00:19:04.897 14:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:04.897 14:23:46 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:04.897 14:23:46 -- nvmf/common.sh@717 -- # local ip
00:19:04.897 14:23:46 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:04.897 14:23:46 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:04.897 14:23:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:04.897 14:23:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:04.897 14:23:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:04.897 14:23:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:04.897 14:23:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:04.897 14:23:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:04.897 14:23:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:04.897 14:23:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:19:04.897 14:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:04.897 14:23:46 -- common/autotest_common.sh@10 -- # set +x
00:19:05.156 nvme0n1
00:19:05.156 14:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:05.156 14:23:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:05.156 14:23:46 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:05.156 14:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:05.156 14:23:46 -- common/autotest_common.sh@10 -- # set +x
00:19:05.156 14:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:05.156 14:23:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:05.156 14:23:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:05.156 14:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:05.156 14:23:46 -- common/autotest_common.sh@10 -- # set +x
00:19:05.156 14:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:05.156 14:23:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:05.156 14:23:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:19:05.156 14:23:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:05.156 14:23:46 -- host/auth.sh@44 -- # digest=sha384
00:19:05.156 14:23:46 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:19:05.156 14:23:46 -- host/auth.sh@44 -- # keyid=1
00:19:05.156 14:23:46 -- host/auth.sh@45 -- # key=DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==:
00:19:05.156 14:23:46 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:19:05.156 14:23:46 -- host/auth.sh@48 -- # echo ffdhe2048
00:19:05.156 14:23:46 -- host/auth.sh@49 -- # echo DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==:
00:19:05.156 14:23:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1
00:19:05.156 14:23:46 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:05.156 14:23:46 -- host/auth.sh@68 -- # digest=sha384
00:19:05.156 14:23:46 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:19:05.156 14:23:46 -- host/auth.sh@68 -- # keyid=1
00:19:05.156 14:23:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:05.156 14:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:05.156 14:23:46 -- common/autotest_common.sh@10 -- # set +x
00:19:05.156 14:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:05.156 14:23:46 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:05.156 14:23:46 -- nvmf/common.sh@717 -- # local ip
00:19:05.156 14:23:46 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:05.156 14:23:46 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:05.156 14:23:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:05.156 14:23:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:05.156 14:23:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:05.156 14:23:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:05.156 14:23:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:05.156 14:23:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:05.156 14:23:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:05.156 14:23:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:19:05.156 14:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:05.156 14:23:46 -- common/autotest_common.sh@10 -- # set +x
00:19:05.415 nvme0n1
00:19:05.415 14:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:05.415 14:23:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:05.415 14:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:05.415 14:23:46 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:05.415 14:23:46 -- common/autotest_common.sh@10 -- # set +x
00:19:05.415 14:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:05.415 14:23:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:05.415 14:23:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:05.415 14:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:05.415 14:23:46 -- common/autotest_common.sh@10 -- # set +x
00:19:05.415 14:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:05.415 14:23:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:05.415 14:23:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:19:05.415 14:23:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:05.415 14:23:46 -- host/auth.sh@44 -- # digest=sha384
00:19:05.415 14:23:46 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:19:05.415 14:23:46 -- host/auth.sh@44 -- # keyid=2
00:19:05.415 14:23:46 -- host/auth.sh@45 -- # key=DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl:
00:19:05.415 14:23:46 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:19:05.415 14:23:46 -- host/auth.sh@48 -- # echo ffdhe2048
00:19:05.415 14:23:46 -- host/auth.sh@49 -- # echo DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl:
00:19:05.415 14:23:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2
00:19:05.415 14:23:46 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:05.415 14:23:46 -- host/auth.sh@68 -- # digest=sha384
00:19:05.415 14:23:46 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:19:05.415 14:23:46 -- host/auth.sh@68 -- # keyid=2
00:19:05.415 14:23:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:05.415 14:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:05.415 14:23:46 -- common/autotest_common.sh@10 -- # set +x
00:19:05.415 14:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:05.415 14:23:46 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:05.415 14:23:46 -- nvmf/common.sh@717 -- # local ip
00:19:05.415 14:23:46 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:05.415 14:23:46 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:05.415 14:23:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:05.415 14:23:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:05.415 14:23:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:05.415 14:23:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:05.415 14:23:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:05.415 14:23:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:05.415 14:23:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:05.415 14:23:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:19:05.415 14:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:05.415 14:23:46 -- common/autotest_common.sh@10 -- # set +x
00:19:05.415 nvme0n1
00:19:05.415 14:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:05.415 14:23:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:05.415 14:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:05.415 14:23:46 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:05.415 14:23:46 -- common/autotest_common.sh@10 -- # set +x
00:19:05.415 14:23:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:05.415 14:23:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:05.415 14:23:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:05.415 14:23:46 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:05.415 14:23:46 -- common/autotest_common.sh@10 -- # set +x
00:19:05.674 14:23:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:05.674 14:23:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:05.674 14:23:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:19:05.674 14:23:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:05.674 14:23:47 -- host/auth.sh@44 -- # digest=sha384
00:19:05.674 14:23:47 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:19:05.674 14:23:47 -- host/auth.sh@44 -- # keyid=3
00:19:05.674 14:23:47 -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==:
00:19:05.674 14:23:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:19:05.674 14:23:47 -- host/auth.sh@48 -- # echo ffdhe2048
00:19:05.674 14:23:47 -- host/auth.sh@49 -- # echo DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==:
00:19:05.674 14:23:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3
00:19:05.674 14:23:47 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:05.674 14:23:47 -- host/auth.sh@68 -- # digest=sha384
00:19:05.674 14:23:47 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:19:05.674 14:23:47 -- host/auth.sh@68 -- # keyid=3
00:19:05.674 14:23:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:05.674 14:23:47 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:05.674 14:23:47 -- common/autotest_common.sh@10 -- # set +x
00:19:05.674 14:23:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:05.674 14:23:47 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:05.674 14:23:47 -- nvmf/common.sh@717 -- # local ip
00:19:05.674 14:23:47 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:05.674 14:23:47 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:05.674 14:23:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:05.674 14:23:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:05.674 14:23:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:05.674 14:23:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:05.674 14:23:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:05.674 14:23:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:05.674 14:23:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:05.674 14:23:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3
00:19:05.674 14:23:47 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:05.674 14:23:47 -- common/autotest_common.sh@10 -- # set +x
00:19:05.674 nvme0n1
00:19:05.674 14:23:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:05.674 14:23:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:05.674 14:23:47 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:05.674 14:23:47 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:05.674 14:23:47 -- common/autotest_common.sh@10 -- # set +x
00:19:05.674 14:23:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:05.674 14:23:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:05.674 14:23:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:05.674 14:23:47 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:05.674 14:23:47 -- common/autotest_common.sh@10 -- # set +x
00:19:05.674 14:23:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:05.674 14:23:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:19:05.674 14:23:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:19:05.674 14:23:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:19:05.674 14:23:47 -- host/auth.sh@44 -- # digest=sha384
00:19:05.674 14:23:47 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:19:05.674 14:23:47 -- host/auth.sh@44 -- # keyid=4
00:19:05.674 14:23:47 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=:
00:19:05.675 14:23:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)'
00:19:05.675 14:23:47 -- host/auth.sh@48 -- # echo ffdhe2048
00:19:05.675 14:23:47 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=:
00:19:05.675 14:23:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4
00:19:05.675 14:23:47 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:19:05.675 14:23:47 -- host/auth.sh@68 -- # digest=sha384
00:19:05.675 14:23:47 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:19:05.675 14:23:47 -- host/auth.sh@68 -- # keyid=4
00:19:05.675 14:23:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:05.675 14:23:47 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:05.675 14:23:47 -- common/autotest_common.sh@10 -- # set +x
00:19:05.675 14:23:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:05.675 14:23:47 -- host/auth.sh@70 -- # get_main_ns_ip
00:19:05.675 14:23:47 -- nvmf/common.sh@717 -- # local ip
00:19:05.675 14:23:47 -- nvmf/common.sh@718 -- # ip_candidates=()
00:19:05.675 14:23:47 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:19:05.675 14:23:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:05.675 14:23:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:05.675 14:23:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:19:05.675 14:23:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:05.675 14:23:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:19:05.675 14:23:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:19:05.675 14:23:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:19:05.675 14:23:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:19:05.675 14:23:47 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:05.675 14:23:47 -- common/autotest_common.sh@10 -- # set +x
00:19:05.933 nvme0n1
00:19:05.933 14:23:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:05.933 14:23:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:19:05.933 14:23:47 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:05.933 14:23:47 -- common/autotest_common.sh@10 -- # set +x
00:19:05.933 14:23:47 -- host/auth.sh@73 -- # jq -r '.[].name'
00:19:05.933 14:23:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:05.933 14:23:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:05.933 14:23:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:05.933 14:23:47 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:05.933 14:23:47 -- common/autotest_common.sh@10 -- # set +x
00:19:05.933 14:23:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
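The same five DHHC-1 secrets recur for every digest/dhgroup pair. Their second field encodes the secret transformation: 00 marks an untransformed secret (key0, key1), while 01/02/03 mark SHA-256/SHA-384/SHA-512-transformed secrets (key2, key3, key4), with the base64 payload carrying the secret plus a CRC-32 tail. Secrets of this shape can be produced with nvme-cli's gen-dhchap-key; the exact flags below are an assumption, so check nvme gen-dhchap-key --help on the system under test:

# -m selects the transformation hmac (0=none, 1=SHA-256, 2=SHA-384, 3=SHA-512),
# -l the secret length in bytes, -n the host NQN the transformation is bound to.
nvme gen-dhchap-key -m 1 -l 32 -n nqn.2024-02.io.spdk:host0
# prints e.g. DHHC-1:01:<base64 payload>: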
digest=sha384 00:19:05.933 14:23:47 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:05.933 14:23:47 -- host/auth.sh@68 -- # keyid=0 00:19:05.933 14:23:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:05.933 14:23:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.933 14:23:47 -- common/autotest_common.sh@10 -- # set +x 00:19:05.933 14:23:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.933 14:23:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:05.933 14:23:47 -- nvmf/common.sh@717 -- # local ip 00:19:05.933 14:23:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:05.933 14:23:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:05.933 14:23:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:05.933 14:23:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:05.933 14:23:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:05.933 14:23:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:05.933 14:23:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:05.933 14:23:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:05.933 14:23:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:05.933 14:23:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:05.933 14:23:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.933 14:23:47 -- common/autotest_common.sh@10 -- # set +x 00:19:06.191 nvme0n1 00:19:06.191 14:23:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.191 14:23:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:06.191 14:23:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.191 14:23:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.191 14:23:47 -- common/autotest_common.sh@10 -- # set +x 00:19:06.191 14:23:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.191 14:23:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.191 14:23:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.191 14:23:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.191 14:23:47 -- common/autotest_common.sh@10 -- # set +x 00:19:06.191 14:23:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.191 14:23:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:06.191 14:23:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:06.191 14:23:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:06.191 14:23:47 -- host/auth.sh@44 -- # digest=sha384 00:19:06.191 14:23:47 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:06.191 14:23:47 -- host/auth.sh@44 -- # keyid=1 00:19:06.191 14:23:47 -- host/auth.sh@45 -- # key=DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:19:06.191 14:23:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:06.191 14:23:47 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:06.191 14:23:47 -- host/auth.sh@49 -- # echo DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:19:06.191 14:23:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:19:06.191 14:23:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:06.191 14:23:47 -- host/auth.sh@68 -- # digest=sha384 00:19:06.191 14:23:47 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:06.191 14:23:47 -- host/auth.sh@68 
-- # keyid=1 00:19:06.191 14:23:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:06.191 14:23:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.191 14:23:47 -- common/autotest_common.sh@10 -- # set +x 00:19:06.191 14:23:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.191 14:23:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:06.191 14:23:47 -- nvmf/common.sh@717 -- # local ip 00:19:06.191 14:23:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:06.191 14:23:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:06.191 14:23:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.191 14:23:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.191 14:23:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:06.191 14:23:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.191 14:23:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:06.191 14:23:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:06.191 14:23:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:06.191 14:23:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:06.191 14:23:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.191 14:23:47 -- common/autotest_common.sh@10 -- # set +x 00:19:06.450 nvme0n1 00:19:06.450 14:23:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.450 14:23:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.450 14:23:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:06.450 14:23:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.450 14:23:47 -- common/autotest_common.sh@10 -- # set +x 00:19:06.450 14:23:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.450 14:23:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.450 14:23:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.450 14:23:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.450 14:23:47 -- common/autotest_common.sh@10 -- # set +x 00:19:06.450 14:23:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.450 14:23:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:06.450 14:23:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:19:06.450 14:23:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:06.450 14:23:47 -- host/auth.sh@44 -- # digest=sha384 00:19:06.450 14:23:47 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:06.450 14:23:47 -- host/auth.sh@44 -- # keyid=2 00:19:06.450 14:23:47 -- host/auth.sh@45 -- # key=DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl: 00:19:06.450 14:23:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:06.450 14:23:47 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:06.450 14:23:47 -- host/auth.sh@49 -- # echo DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl: 00:19:06.450 14:23:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:19:06.450 14:23:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:06.450 14:23:47 -- host/auth.sh@68 -- # digest=sha384 00:19:06.450 14:23:47 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:06.450 14:23:47 -- host/auth.sh@68 -- # keyid=2 00:19:06.450 14:23:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:06.450 14:23:47 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.450 14:23:47 -- common/autotest_common.sh@10 -- # set +x 00:19:06.450 14:23:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.450 14:23:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:06.450 14:23:47 -- nvmf/common.sh@717 -- # local ip 00:19:06.450 14:23:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:06.450 14:23:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:06.450 14:23:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.450 14:23:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.450 14:23:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:06.450 14:23:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.450 14:23:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:06.450 14:23:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:06.450 14:23:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:06.450 14:23:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:06.450 14:23:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.450 14:23:47 -- common/autotest_common.sh@10 -- # set +x 00:19:06.709 nvme0n1 00:19:06.709 14:23:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.709 14:23:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.709 14:23:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.709 14:23:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:06.709 14:23:48 -- common/autotest_common.sh@10 -- # set +x 00:19:06.709 14:23:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.709 14:23:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.709 14:23:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.709 14:23:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.709 14:23:48 -- common/autotest_common.sh@10 -- # set +x 00:19:06.709 14:23:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.709 14:23:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:06.709 14:23:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:19:06.709 14:23:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:06.709 14:23:48 -- host/auth.sh@44 -- # digest=sha384 00:19:06.709 14:23:48 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:06.709 14:23:48 -- host/auth.sh@44 -- # keyid=3 00:19:06.709 14:23:48 -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==: 00:19:06.709 14:23:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:06.709 14:23:48 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:06.709 14:23:48 -- host/auth.sh@49 -- # echo DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==: 00:19:06.709 14:23:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:19:06.709 14:23:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:06.709 14:23:48 -- host/auth.sh@68 -- # digest=sha384 00:19:06.709 14:23:48 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:06.709 14:23:48 -- host/auth.sh@68 -- # keyid=3 00:19:06.709 14:23:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:06.709 14:23:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.709 14:23:48 -- common/autotest_common.sh@10 -- # set +x 
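The nvmet_auth_set_key steps traced above (the host/auth.sh@42-49 markers) provision the target side of the DH-HMAC-CHAP exchange: for each digest/dhgroup/keyid combination the harness echoes the HMAC name, the FFDHE group, and the DHHC-1 secret, with the redirection targets not visible in the xtrace output. A plausible reading, assuming a Linux nvmet configfs target with the standard dhchap_hash/dhchap_dhgroup/dhchap_key host attributes (an assumption, not shown in this log), is:

# Sketch of the target-side key provisioning; the configfs layout and the
# host directory name are assumptions inferred from the attach commands.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "${host}/dhchap_hash"     # e.g. hmac(sha384)
    echo "${dhgroup}"      > "${host}/dhchap_dhgroup"  # e.g. ffdhe3072
    echo "${keys[$keyid]}" > "${host}/dhchap_key"      # DHHC-1:... secret
}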
00:19:06.709 14:23:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.709 14:23:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:06.709 14:23:48 -- nvmf/common.sh@717 -- # local ip 00:19:06.709 14:23:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:06.709 14:23:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:06.709 14:23:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.709 14:23:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.709 14:23:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:06.709 14:23:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.709 14:23:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:06.709 14:23:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:06.709 14:23:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:06.709 14:23:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:06.709 14:23:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.709 14:23:48 -- common/autotest_common.sh@10 -- # set +x 00:19:06.709 nvme0n1 00:19:06.709 14:23:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.709 14:23:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.709 14:23:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:06.709 14:23:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.709 14:23:48 -- common/autotest_common.sh@10 -- # set +x 00:19:06.969 14:23:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.969 14:23:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.969 14:23:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:06.969 14:23:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.969 14:23:48 -- common/autotest_common.sh@10 -- # set +x 00:19:06.969 14:23:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.969 14:23:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:06.969 14:23:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:19:06.969 14:23:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:06.969 14:23:48 -- host/auth.sh@44 -- # digest=sha384 00:19:06.969 14:23:48 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:06.969 14:23:48 -- host/auth.sh@44 -- # keyid=4 00:19:06.969 14:23:48 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=: 00:19:06.969 14:23:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:06.969 14:23:48 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:06.969 14:23:48 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=: 00:19:06.969 14:23:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:19:06.969 14:23:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:06.969 14:23:48 -- host/auth.sh@68 -- # digest=sha384 00:19:06.969 14:23:48 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:06.969 14:23:48 -- host/auth.sh@68 -- # keyid=4 00:19:06.969 14:23:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:06.969 14:23:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.969 14:23:48 -- common/autotest_common.sh@10 -- # set +x 00:19:06.969 14:23:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
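On the host side every iteration follows the same two RPCs: bdev_nvme_set_options narrows the digests and DH groups the initiator will offer, and bdev_nvme_attach_controller dials the target with the matching --dhchap-key. The rpc_cmd wrapper hides the plumbing to the running SPDK application; issued directly, the equivalent calls would look like the sketch below (the rpc.py path is an assumption, all arguments are copied from this log):

# Host-side configuration and authenticated connect, as plain rpc.py calls.
./scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key4   # name of a DHHC-1 secret already known to the host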
00:19:06.969 14:23:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:06.969 14:23:48 -- nvmf/common.sh@717 -- # local ip 00:19:06.969 14:23:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:06.969 14:23:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:06.969 14:23:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:06.969 14:23:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:06.969 14:23:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:06.969 14:23:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:06.969 14:23:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:06.969 14:23:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:06.969 14:23:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:06.969 14:23:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:06.969 14:23:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.969 14:23:48 -- common/autotest_common.sh@10 -- # set +x 00:19:06.969 nvme0n1 00:19:06.969 14:23:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.969 14:23:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:06.969 14:23:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.969 14:23:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:06.969 14:23:48 -- common/autotest_common.sh@10 -- # set +x 00:19:06.969 14:23:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.227 14:23:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.227 14:23:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:07.227 14:23:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.227 14:23:48 -- common/autotest_common.sh@10 -- # set +x 00:19:07.227 14:23:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.227 14:23:48 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.227 14:23:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:07.227 14:23:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:19:07.227 14:23:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:07.227 14:23:48 -- host/auth.sh@44 -- # digest=sha384 00:19:07.227 14:23:48 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:07.227 14:23:48 -- host/auth.sh@44 -- # keyid=0 00:19:07.227 14:23:48 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6: 00:19:07.227 14:23:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:07.227 14:23:48 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:07.227 14:23:48 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6: 00:19:07.227 14:23:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:19:07.227 14:23:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:07.227 14:23:48 -- host/auth.sh@68 -- # digest=sha384 00:19:07.227 14:23:48 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:07.227 14:23:48 -- host/auth.sh@68 -- # keyid=0 00:19:07.227 14:23:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:07.227 14:23:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.227 14:23:48 -- common/autotest_common.sh@10 -- # set +x 00:19:07.227 14:23:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.227 14:23:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:07.227 14:23:48 -- 
nvmf/common.sh@717 -- # local ip 00:19:07.227 14:23:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:07.227 14:23:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:07.227 14:23:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.227 14:23:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.227 14:23:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:07.227 14:23:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.227 14:23:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:07.227 14:23:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:07.227 14:23:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:07.227 14:23:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:07.227 14:23:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.227 14:23:48 -- common/autotest_common.sh@10 -- # set +x 00:19:07.485 nvme0n1 00:19:07.485 14:23:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.485 14:23:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.485 14:23:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:07.485 14:23:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.485 14:23:48 -- common/autotest_common.sh@10 -- # set +x 00:19:07.485 14:23:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.485 14:23:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.485 14:23:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:07.485 14:23:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.485 14:23:48 -- common/autotest_common.sh@10 -- # set +x 00:19:07.485 14:23:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.485 14:23:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:07.485 14:23:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:19:07.485 14:23:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:07.485 14:23:48 -- host/auth.sh@44 -- # digest=sha384 00:19:07.485 14:23:48 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:07.485 14:23:48 -- host/auth.sh@44 -- # keyid=1 00:19:07.485 14:23:48 -- host/auth.sh@45 -- # key=DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:19:07.485 14:23:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:07.485 14:23:48 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:07.485 14:23:48 -- host/auth.sh@49 -- # echo DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:19:07.485 14:23:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:19:07.485 14:23:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:07.485 14:23:48 -- host/auth.sh@68 -- # digest=sha384 00:19:07.485 14:23:48 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:07.485 14:23:48 -- host/auth.sh@68 -- # keyid=1 00:19:07.485 14:23:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:07.485 14:23:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.485 14:23:48 -- common/autotest_common.sh@10 -- # set +x 00:19:07.485 14:23:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.485 14:23:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:07.485 14:23:48 -- nvmf/common.sh@717 -- # local ip 00:19:07.485 14:23:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:07.485 14:23:48 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:07.485 14:23:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.485 14:23:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.485 14:23:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:07.485 14:23:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.485 14:23:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:07.485 14:23:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:07.485 14:23:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:07.485 14:23:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:07.485 14:23:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.485 14:23:48 -- common/autotest_common.sh@10 -- # set +x 00:19:07.744 nvme0n1 00:19:07.744 14:23:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.744 14:23:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:07.744 14:23:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.744 14:23:49 -- common/autotest_common.sh@10 -- # set +x 00:19:07.744 14:23:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:07.744 14:23:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.744 14:23:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.744 14:23:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:07.744 14:23:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.744 14:23:49 -- common/autotest_common.sh@10 -- # set +x 00:19:07.744 14:23:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.744 14:23:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:07.744 14:23:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:07.744 14:23:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:07.744 14:23:49 -- host/auth.sh@44 -- # digest=sha384 00:19:07.744 14:23:49 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:07.744 14:23:49 -- host/auth.sh@44 -- # keyid=2 00:19:07.744 14:23:49 -- host/auth.sh@45 -- # key=DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl: 00:19:07.744 14:23:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:07.744 14:23:49 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:07.744 14:23:49 -- host/auth.sh@49 -- # echo DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl: 00:19:07.744 14:23:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:19:07.744 14:23:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:07.744 14:23:49 -- host/auth.sh@68 -- # digest=sha384 00:19:07.744 14:23:49 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:07.744 14:23:49 -- host/auth.sh@68 -- # keyid=2 00:19:07.744 14:23:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:07.744 14:23:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.744 14:23:49 -- common/autotest_common.sh@10 -- # set +x 00:19:07.744 14:23:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.744 14:23:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:07.744 14:23:49 -- nvmf/common.sh@717 -- # local ip 00:19:07.744 14:23:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:07.744 14:23:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:07.744 14:23:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.744 14:23:49 -- 
nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.744 14:23:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:07.744 14:23:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:07.744 14:23:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:07.744 14:23:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:07.744 14:23:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:07.744 14:23:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:07.744 14:23:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.744 14:23:49 -- common/autotest_common.sh@10 -- # set +x 00:19:08.009 nvme0n1 00:19:08.009 14:23:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.009 14:23:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:08.009 14:23:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.009 14:23:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.009 14:23:49 -- common/autotest_common.sh@10 -- # set +x 00:19:08.009 14:23:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.296 14:23:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.296 14:23:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.296 14:23:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.296 14:23:49 -- common/autotest_common.sh@10 -- # set +x 00:19:08.296 14:23:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.296 14:23:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:08.297 14:23:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:08.297 14:23:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:08.297 14:23:49 -- host/auth.sh@44 -- # digest=sha384 00:19:08.297 14:23:49 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:08.297 14:23:49 -- host/auth.sh@44 -- # keyid=3 00:19:08.297 14:23:49 -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==: 00:19:08.297 14:23:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:08.297 14:23:49 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:08.297 14:23:49 -- host/auth.sh@49 -- # echo DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==: 00:19:08.297 14:23:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:19:08.297 14:23:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:08.297 14:23:49 -- host/auth.sh@68 -- # digest=sha384 00:19:08.297 14:23:49 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:08.297 14:23:49 -- host/auth.sh@68 -- # keyid=3 00:19:08.297 14:23:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:08.297 14:23:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.297 14:23:49 -- common/autotest_common.sh@10 -- # set +x 00:19:08.297 14:23:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.297 14:23:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:08.297 14:23:49 -- nvmf/common.sh@717 -- # local ip 00:19:08.297 14:23:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:08.297 14:23:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:08.297 14:23:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.297 14:23:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.297 14:23:49 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:19:08.297 14:23:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.297 14:23:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:08.297 14:23:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:08.297 14:23:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:08.297 14:23:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:08.297 14:23:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.297 14:23:49 -- common/autotest_common.sh@10 -- # set +x 00:19:08.578 nvme0n1 00:19:08.578 14:23:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.578 14:23:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.578 14:23:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.578 14:23:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:08.578 14:23:49 -- common/autotest_common.sh@10 -- # set +x 00:19:08.578 14:23:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.578 14:23:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.578 14:23:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.578 14:23:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.578 14:23:49 -- common/autotest_common.sh@10 -- # set +x 00:19:08.578 14:23:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.578 14:23:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:08.579 14:23:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:08.579 14:23:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:08.579 14:23:49 -- host/auth.sh@44 -- # digest=sha384 00:19:08.579 14:23:49 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:08.579 14:23:49 -- host/auth.sh@44 -- # keyid=4 00:19:08.579 14:23:49 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=: 00:19:08.579 14:23:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:08.579 14:23:49 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:08.579 14:23:49 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=: 00:19:08.579 14:23:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:19:08.579 14:23:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:08.579 14:23:49 -- host/auth.sh@68 -- # digest=sha384 00:19:08.579 14:23:49 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:08.579 14:23:49 -- host/auth.sh@68 -- # keyid=4 00:19:08.579 14:23:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:08.579 14:23:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.579 14:23:49 -- common/autotest_common.sh@10 -- # set +x 00:19:08.579 14:23:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.579 14:23:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:08.579 14:23:49 -- nvmf/common.sh@717 -- # local ip 00:19:08.579 14:23:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:08.579 14:23:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:08.579 14:23:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.579 14:23:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.579 14:23:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:08.579 14:23:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP 
]] 00:19:08.579 14:23:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:08.579 14:23:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:08.579 14:23:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:08.579 14:23:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:08.579 14:23:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.579 14:23:49 -- common/autotest_common.sh@10 -- # set +x 00:19:08.837 nvme0n1 00:19:08.837 14:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.837 14:23:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:08.837 14:23:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:08.837 14:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.837 14:23:50 -- common/autotest_common.sh@10 -- # set +x 00:19:08.837 14:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.837 14:23:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.837 14:23:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:08.837 14:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.837 14:23:50 -- common/autotest_common.sh@10 -- # set +x 00:19:08.837 14:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.837 14:23:50 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:08.837 14:23:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:08.837 14:23:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:08.837 14:23:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:08.837 14:23:50 -- host/auth.sh@44 -- # digest=sha384 00:19:08.837 14:23:50 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:08.837 14:23:50 -- host/auth.sh@44 -- # keyid=0 00:19:08.837 14:23:50 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6: 00:19:08.837 14:23:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:08.837 14:23:50 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:08.837 14:23:50 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6: 00:19:08.837 14:23:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:19:08.837 14:23:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:08.837 14:23:50 -- host/auth.sh@68 -- # digest=sha384 00:19:08.837 14:23:50 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:08.837 14:23:50 -- host/auth.sh@68 -- # keyid=0 00:19:08.837 14:23:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:08.837 14:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.837 14:23:50 -- common/autotest_common.sh@10 -- # set +x 00:19:08.837 14:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:08.837 14:23:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:08.837 14:23:50 -- nvmf/common.sh@717 -- # local ip 00:19:08.837 14:23:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:08.837 14:23:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:08.837 14:23:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:08.837 14:23:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:08.837 14:23:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:08.837 14:23:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:08.837 14:23:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:08.837 
14:23:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:08.837 14:23:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:08.837 14:23:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:08.837 14:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:08.837 14:23:50 -- common/autotest_common.sh@10 -- # set +x 00:19:09.404 nvme0n1 00:19:09.404 14:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.404 14:23:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:09.404 14:23:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:09.404 14:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.404 14:23:50 -- common/autotest_common.sh@10 -- # set +x 00:19:09.404 14:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.404 14:23:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.404 14:23:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:09.404 14:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.404 14:23:50 -- common/autotest_common.sh@10 -- # set +x 00:19:09.404 14:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.404 14:23:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:09.404 14:23:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:09.404 14:23:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:09.404 14:23:50 -- host/auth.sh@44 -- # digest=sha384 00:19:09.404 14:23:50 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:09.404 14:23:50 -- host/auth.sh@44 -- # keyid=1 00:19:09.404 14:23:50 -- host/auth.sh@45 -- # key=DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:19:09.404 14:23:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:09.404 14:23:50 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:09.404 14:23:50 -- host/auth.sh@49 -- # echo DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:19:09.404 14:23:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:19:09.404 14:23:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:09.404 14:23:50 -- host/auth.sh@68 -- # digest=sha384 00:19:09.404 14:23:50 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:09.404 14:23:50 -- host/auth.sh@68 -- # keyid=1 00:19:09.404 14:23:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:09.404 14:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.404 14:23:50 -- common/autotest_common.sh@10 -- # set +x 00:19:09.404 14:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.404 14:23:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:09.404 14:23:50 -- nvmf/common.sh@717 -- # local ip 00:19:09.404 14:23:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:09.404 14:23:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:09.404 14:23:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:09.404 14:23:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:09.404 14:23:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:09.404 14:23:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:09.405 14:23:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:09.405 14:23:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:09.405 14:23:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
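The get_main_ns_ip fragment that repeats before every attach is a small address-selection helper, and the xtrace lines give away essentially its whole body: an associative array maps each transport to the name of the environment variable holding the right address, tcp resolves to NVMF_INITIATOR_IP, and indirect expansion turns that name into 10.0.0.1. Reassembled from the traced statements (only the error-return details are assumed):

# Reconstruction of get_main_ns_ip from the nvmf/common.sh@717-731 trace.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1                    # traced as [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # candidate must exist
    ip=${ip_candidates[$TEST_TRANSPORT]}                    # ip=NVMF_INITIATOR_IP

    [[ -z ${!ip} ]] && return 1   # indirect expansion; 10.0.0.1 in this run
    echo "${!ip}"
}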
00:19:09.405 14:23:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:09.405 14:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.405 14:23:50 -- common/autotest_common.sh@10 -- # set +x 00:19:09.971 nvme0n1 00:19:09.971 14:23:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:09.971 14:23:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:09.971 14:23:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:09.971 14:23:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:09.971 14:23:51 -- common/autotest_common.sh@10 -- # set +x 00:19:09.971 14:23:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.229 14:23:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.229 14:23:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:10.229 14:23:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.229 14:23:51 -- common/autotest_common.sh@10 -- # set +x 00:19:10.229 14:23:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.229 14:23:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:10.229 14:23:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:10.229 14:23:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:10.229 14:23:51 -- host/auth.sh@44 -- # digest=sha384 00:19:10.229 14:23:51 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:10.229 14:23:51 -- host/auth.sh@44 -- # keyid=2 00:19:10.229 14:23:51 -- host/auth.sh@45 -- # key=DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl: 00:19:10.229 14:23:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:10.229 14:23:51 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:10.229 14:23:51 -- host/auth.sh@49 -- # echo DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl: 00:19:10.229 14:23:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:19:10.229 14:23:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:10.229 14:23:51 -- host/auth.sh@68 -- # digest=sha384 00:19:10.229 14:23:51 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:10.229 14:23:51 -- host/auth.sh@68 -- # keyid=2 00:19:10.229 14:23:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:10.229 14:23:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.229 14:23:51 -- common/autotest_common.sh@10 -- # set +x 00:19:10.229 14:23:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.229 14:23:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:10.229 14:23:51 -- nvmf/common.sh@717 -- # local ip 00:19:10.229 14:23:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:10.229 14:23:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:10.229 14:23:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.229 14:23:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.229 14:23:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:10.229 14:23:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.229 14:23:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:10.229 14:23:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:10.229 14:23:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:10.229 14:23:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:10.229 14:23:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.229 14:23:51 -- common/autotest_common.sh@10 -- # set +x 00:19:10.795 nvme0n1 00:19:10.795 14:23:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.795 14:23:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:10.795 14:23:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:10.795 14:23:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.795 14:23:52 -- common/autotest_common.sh@10 -- # set +x 00:19:10.795 14:23:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.795 14:23:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.795 14:23:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:10.795 14:23:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.795 14:23:52 -- common/autotest_common.sh@10 -- # set +x 00:19:10.795 14:23:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.795 14:23:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:10.795 14:23:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:19:10.795 14:23:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:10.795 14:23:52 -- host/auth.sh@44 -- # digest=sha384 00:19:10.795 14:23:52 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:10.795 14:23:52 -- host/auth.sh@44 -- # keyid=3 00:19:10.795 14:23:52 -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==: 00:19:10.795 14:23:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:10.795 14:23:52 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:10.795 14:23:52 -- host/auth.sh@49 -- # echo DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==: 00:19:10.795 14:23:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:19:10.795 14:23:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:10.795 14:23:52 -- host/auth.sh@68 -- # digest=sha384 00:19:10.795 14:23:52 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:10.795 14:23:52 -- host/auth.sh@68 -- # keyid=3 00:19:10.795 14:23:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:10.795 14:23:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.795 14:23:52 -- common/autotest_common.sh@10 -- # set +x 00:19:10.795 14:23:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.795 14:23:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:10.795 14:23:52 -- nvmf/common.sh@717 -- # local ip 00:19:10.795 14:23:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:10.795 14:23:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:10.795 14:23:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:10.795 14:23:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:10.795 14:23:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:10.795 14:23:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:10.795 14:23:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:10.795 14:23:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:10.795 14:23:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:10.795 14:23:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:10.795 14:23:52 -- common/autotest_common.sh@549 -- # xtrace_disable 
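Each authenticated connect is verified the same way before teardown: bdev_nvme_get_controllers is piped through jq to extract the controller name, the [[ nvme0 == \n\v\m\e\0 ]] comparison (the backslashes only force a literal, non-glob match) confirms the bdev came up, and the controller is detached so the next digest/dhgroup/keyid combination starts from a clean state. The whole check reduces to three lines:

# Verify-and-teardown performed after every authenticated attach.
ctrlr=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $ctrlr == "nvme0" ]]                      # authentication and attach succeeded
rpc_cmd bdev_nvme_detach_controller nvme0    # clean slate for the next combination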
00:19:10.795 14:23:52 -- common/autotest_common.sh@10 -- # set +x 00:19:11.361 nvme0n1 00:19:11.361 14:23:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.361 14:23:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:11.361 14:23:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.361 14:23:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:11.361 14:23:52 -- common/autotest_common.sh@10 -- # set +x 00:19:11.361 14:23:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.361 14:23:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.361 14:23:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.361 14:23:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.361 14:23:52 -- common/autotest_common.sh@10 -- # set +x 00:19:11.361 14:23:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.361 14:23:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:11.361 14:23:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:11.361 14:23:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:11.361 14:23:52 -- host/auth.sh@44 -- # digest=sha384 00:19:11.361 14:23:52 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:11.361 14:23:52 -- host/auth.sh@44 -- # keyid=4 00:19:11.361 14:23:52 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=: 00:19:11.361 14:23:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:11.361 14:23:52 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:11.361 14:23:52 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=: 00:19:11.361 14:23:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:19:11.361 14:23:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:11.361 14:23:52 -- host/auth.sh@68 -- # digest=sha384 00:19:11.361 14:23:52 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:11.361 14:23:52 -- host/auth.sh@68 -- # keyid=4 00:19:11.361 14:23:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:11.361 14:23:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.361 14:23:52 -- common/autotest_common.sh@10 -- # set +x 00:19:11.362 14:23:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.362 14:23:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:11.362 14:23:52 -- nvmf/common.sh@717 -- # local ip 00:19:11.362 14:23:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:11.362 14:23:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:11.362 14:23:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:11.362 14:23:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:11.362 14:23:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:11.362 14:23:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:11.362 14:23:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:11.362 14:23:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:11.362 14:23:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:11.362 14:23:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:11.362 14:23:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.362 14:23:52 -- common/autotest_common.sh@10 -- # set +x 00:19:11.926 
nvme0n1 00:19:11.926 14:23:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.926 14:23:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:11.926 14:23:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:11.926 14:23:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.926 14:23:53 -- common/autotest_common.sh@10 -- # set +x 00:19:11.926 14:23:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:11.926 14:23:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.926 14:23:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:11.926 14:23:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:11.926 14:23:53 -- common/autotest_common.sh@10 -- # set +x 00:19:12.184 14:23:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.184 14:23:53 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:12.184 14:23:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:12.184 14:23:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:12.184 14:23:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:12.184 14:23:53 -- host/auth.sh@44 -- # digest=sha384 00:19:12.184 14:23:53 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:12.184 14:23:53 -- host/auth.sh@44 -- # keyid=0 00:19:12.184 14:23:53 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6: 00:19:12.184 14:23:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:12.184 14:23:53 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:12.184 14:23:53 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6: 00:19:12.184 14:23:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:19:12.184 14:23:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:12.184 14:23:53 -- host/auth.sh@68 -- # digest=sha384 00:19:12.184 14:23:53 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:12.184 14:23:53 -- host/auth.sh@68 -- # keyid=0 00:19:12.184 14:23:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:12.184 14:23:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.184 14:23:53 -- common/autotest_common.sh@10 -- # set +x 00:19:12.184 14:23:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:12.184 14:23:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:12.184 14:23:53 -- nvmf/common.sh@717 -- # local ip 00:19:12.184 14:23:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:12.184 14:23:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:12.184 14:23:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:12.184 14:23:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:12.184 14:23:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:12.184 14:23:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:12.184 14:23:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:12.184 14:23:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:12.184 14:23:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:12.184 14:23:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:12.184 14:23:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:12.184 14:23:53 -- common/autotest_common.sh@10 -- # set +x 00:19:13.117 nvme0n1 00:19:13.117 14:23:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
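The secrets themselves are NVMe DH-HMAC-CHAP key strings of the form DHHC-1:<t>:<base64>:, where <t> names an optional PSK transform (00 for none, 01/02/03 for SHA-256/384/512) and the base64 payload carries the raw secret followed by a 4-byte CRC32, per the NVMe-oF specification. That implies payload sizes of 36, 52, or 68 bytes for the standard 32-, 48-, and 64-byte secrets, which matches the key lengths in this log; a structural sanity check built on that assumption:

# Structural check of a DHHC-1 key string (payload = secret || CRC32 is the
# assumed format; only the shape is validated, not the CRC itself).
check_dhchap_key() {
    local key=$1 len
    [[ $key =~ ^DHHC-1:(0[0-3]):([A-Za-z0-9+/=]+):$ ]] || return 1
    len=$(printf '%s' "${BASH_REMATCH[2]}" | base64 -d | wc -c)
    case $len in
        36 | 52 | 68) return 0 ;;   # 32/48/64-byte secret + 4-byte CRC32
        *)            return 1 ;;
    esac
}

check_dhchap_key 'DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6:'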
00:19:13.117 14:23:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:13.117 14:23:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.117 14:23:54 -- common/autotest_common.sh@10 -- # set +x 00:19:13.117 14:23:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:13.374 14:23:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.374 14:23:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.374 14:23:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:13.374 14:23:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.374 14:23:54 -- common/autotest_common.sh@10 -- # set +x 00:19:13.374 14:23:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.374 14:23:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:13.374 14:23:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:19:13.374 14:23:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:13.374 14:23:54 -- host/auth.sh@44 -- # digest=sha384 00:19:13.374 14:23:54 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:13.374 14:23:54 -- host/auth.sh@44 -- # keyid=1 00:19:13.374 14:23:54 -- host/auth.sh@45 -- # key=DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:19:13.374 14:23:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:13.374 14:23:54 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:13.374 14:23:54 -- host/auth.sh@49 -- # echo DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:19:13.374 14:23:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:19:13.374 14:23:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:13.374 14:23:54 -- host/auth.sh@68 -- # digest=sha384 00:19:13.374 14:23:54 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:13.374 14:23:54 -- host/auth.sh@68 -- # keyid=1 00:19:13.374 14:23:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:13.374 14:23:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.374 14:23:54 -- common/autotest_common.sh@10 -- # set +x 00:19:13.374 14:23:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:13.374 14:23:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:13.374 14:23:54 -- nvmf/common.sh@717 -- # local ip 00:19:13.374 14:23:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:13.375 14:23:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:13.375 14:23:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:13.375 14:23:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:13.375 14:23:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:13.375 14:23:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:13.375 14:23:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:13.375 14:23:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:13.375 14:23:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:13.375 14:23:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:13.375 14:23:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:13.375 14:23:54 -- common/autotest_common.sh@10 -- # set +x 00:19:14.306 nvme0n1 00:19:14.306 14:23:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.306 14:23:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.306 14:23:55 -- host/auth.sh@73 
-- # jq -r '.[].name' 00:19:14.306 14:23:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.306 14:23:55 -- common/autotest_common.sh@10 -- # set +x 00:19:14.306 14:23:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.306 14:23:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.306 14:23:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.306 14:23:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.306 14:23:55 -- common/autotest_common.sh@10 -- # set +x 00:19:14.564 14:23:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.564 14:23:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:14.564 14:23:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:14.564 14:23:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:14.564 14:23:55 -- host/auth.sh@44 -- # digest=sha384 00:19:14.564 14:23:55 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:14.564 14:23:55 -- host/auth.sh@44 -- # keyid=2 00:19:14.564 14:23:55 -- host/auth.sh@45 -- # key=DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl: 00:19:14.564 14:23:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:14.564 14:23:55 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:14.564 14:23:55 -- host/auth.sh@49 -- # echo DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl: 00:19:14.564 14:23:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:19:14.564 14:23:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:14.564 14:23:55 -- host/auth.sh@68 -- # digest=sha384 00:19:14.564 14:23:55 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:14.564 14:23:55 -- host/auth.sh@68 -- # keyid=2 00:19:14.564 14:23:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:14.564 14:23:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.564 14:23:55 -- common/autotest_common.sh@10 -- # set +x 00:19:14.564 14:23:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:14.564 14:23:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:14.564 14:23:55 -- nvmf/common.sh@717 -- # local ip 00:19:14.564 14:23:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:14.564 14:23:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:14.564 14:23:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:14.564 14:23:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:14.564 14:23:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:14.564 14:23:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:14.564 14:23:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:14.564 14:23:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:14.564 14:23:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:14.564 14:23:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:14.564 14:23:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:14.564 14:23:55 -- common/autotest_common.sh@10 -- # set +x 00:19:15.496 nvme0n1 00:19:15.496 14:23:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.496 14:23:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.496 14:23:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:15.496 14:23:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.496 14:23:56 -- common/autotest_common.sh@10 -- # set +x 
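The host/auth.sh@108-111 markers that bracket every block expose the harness's shape: an outer loop over DH groups (ffdhe2048 through ffdhe8192 in this section), an inner loop over the key indices, and one target-side nvmet_auth_set_key plus one host-side connect_authenticate per pair, all under the sha384 digest currently being exercised. Stripped of the tracing, the driver is roughly as follows (loop-variable names beyond those visible in the trace are assumptions):

# Approximate driver behind the host/auth.sh@108-111 trace markers.
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

for dhgroup in "${dhgroups[@]}"; do                        # auth.sh@108
    for keyid in "${!keys[@]}"; do                         # auth.sh@109
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # auth.sh@110: target
        connect_authenticate "$digest" "$dhgroup" "$keyid" # auth.sh@111: host
    done
done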
00:19:15.496 14:23:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.496 14:23:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.496 14:23:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.496 14:23:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.496 14:23:57 -- common/autotest_common.sh@10 -- # set +x 00:19:15.496 14:23:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.496 14:23:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:15.496 14:23:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:19:15.496 14:23:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:15.496 14:23:57 -- host/auth.sh@44 -- # digest=sha384 00:19:15.496 14:23:57 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:15.496 14:23:57 -- host/auth.sh@44 -- # keyid=3 00:19:15.496 14:23:57 -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==: 00:19:15.496 14:23:57 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:15.496 14:23:57 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:15.496 14:23:57 -- host/auth.sh@49 -- # echo DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==: 00:19:15.496 14:23:57 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:19:15.496 14:23:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:15.496 14:23:57 -- host/auth.sh@68 -- # digest=sha384 00:19:15.496 14:23:57 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:15.496 14:23:57 -- host/auth.sh@68 -- # keyid=3 00:19:15.496 14:23:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:15.496 14:23:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.496 14:23:57 -- common/autotest_common.sh@10 -- # set +x 00:19:15.496 14:23:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.496 14:23:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:15.496 14:23:57 -- nvmf/common.sh@717 -- # local ip 00:19:15.496 14:23:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:15.496 14:23:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:15.497 14:23:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:15.497 14:23:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:15.497 14:23:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:15.497 14:23:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:15.497 14:23:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:15.497 14:23:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:15.497 14:23:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:15.497 14:23:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:15.497 14:23:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.497 14:23:57 -- common/autotest_common.sh@10 -- # set +x 00:19:16.871 nvme0n1 00:19:16.871 14:23:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.871 14:23:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:16.871 14:23:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.871 14:23:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.871 14:23:58 -- common/autotest_common.sh@10 -- # set +x 00:19:16.871 14:23:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.871 14:23:58 -- host/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:19:16.871 14:23:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.871 14:23:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.871 14:23:58 -- common/autotest_common.sh@10 -- # set +x 00:19:16.871 14:23:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.871 14:23:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:16.871 14:23:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:19:16.871 14:23:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:16.871 14:23:58 -- host/auth.sh@44 -- # digest=sha384 00:19:16.871 14:23:58 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:16.871 14:23:58 -- host/auth.sh@44 -- # keyid=4 00:19:16.871 14:23:58 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=: 00:19:16.871 14:23:58 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:19:16.871 14:23:58 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:16.871 14:23:58 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=: 00:19:16.871 14:23:58 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:19:16.871 14:23:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:16.871 14:23:58 -- host/auth.sh@68 -- # digest=sha384 00:19:16.871 14:23:58 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:16.871 14:23:58 -- host/auth.sh@68 -- # keyid=4 00:19:16.871 14:23:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:16.871 14:23:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.871 14:23:58 -- common/autotest_common.sh@10 -- # set +x 00:19:16.871 14:23:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:16.871 14:23:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:16.871 14:23:58 -- nvmf/common.sh@717 -- # local ip 00:19:16.871 14:23:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:16.871 14:23:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:16.871 14:23:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:16.871 14:23:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:16.871 14:23:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:16.871 14:23:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:16.871 14:23:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:16.871 14:23:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:16.871 14:23:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:16.871 14:23:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:16.871 14:23:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:16.871 14:23:58 -- common/autotest_common.sh@10 -- # set +x 00:19:17.806 nvme0n1 00:19:17.806 14:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:17.806 14:23:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.806 14:23:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:17.806 14:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:17.806 14:23:59 -- common/autotest_common.sh@10 -- # set +x 00:19:17.806 14:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.065 14:23:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.065 14:23:59 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:18.065 14:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.065 14:23:59 -- common/autotest_common.sh@10 -- # set +x 00:19:18.065 14:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.065 14:23:59 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:19:18.065 14:23:59 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:18.065 14:23:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:18.065 14:23:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:19:18.065 14:23:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:18.065 14:23:59 -- host/auth.sh@44 -- # digest=sha512 00:19:18.065 14:23:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:18.065 14:23:59 -- host/auth.sh@44 -- # keyid=0 00:19:18.065 14:23:59 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6: 00:19:18.065 14:23:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:18.065 14:23:59 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:18.065 14:23:59 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6: 00:19:18.065 14:23:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:19:18.065 14:23:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:18.065 14:23:59 -- host/auth.sh@68 -- # digest=sha512 00:19:18.065 14:23:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:18.065 14:23:59 -- host/auth.sh@68 -- # keyid=0 00:19:18.065 14:23:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:18.065 14:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.065 14:23:59 -- common/autotest_common.sh@10 -- # set +x 00:19:18.065 14:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.065 14:23:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:18.065 14:23:59 -- nvmf/common.sh@717 -- # local ip 00:19:18.065 14:23:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:18.065 14:23:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:18.065 14:23:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.065 14:23:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.065 14:23:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:18.065 14:23:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.065 14:23:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:18.065 14:23:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:18.065 14:23:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:18.065 14:23:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:18.065 14:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.065 14:23:59 -- common/autotest_common.sh@10 -- # set +x 00:19:18.065 nvme0n1 00:19:18.065 14:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.065 14:23:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.065 14:23:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:18.065 14:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.065 14:23:59 -- common/autotest_common.sh@10 -- # set +x 00:19:18.065 14:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.065 14:23:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.065 14:23:59 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:18.065 14:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.065 14:23:59 -- common/autotest_common.sh@10 -- # set +x 00:19:18.065 14:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.065 14:23:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:18.065 14:23:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:19:18.065 14:23:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:18.065 14:23:59 -- host/auth.sh@44 -- # digest=sha512 00:19:18.065 14:23:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:18.065 14:23:59 -- host/auth.sh@44 -- # keyid=1 00:19:18.065 14:23:59 -- host/auth.sh@45 -- # key=DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:19:18.065 14:23:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:18.065 14:23:59 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:18.065 14:23:59 -- host/auth.sh@49 -- # echo DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:19:18.065 14:23:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:19:18.065 14:23:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:18.065 14:23:59 -- host/auth.sh@68 -- # digest=sha512 00:19:18.065 14:23:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:18.065 14:23:59 -- host/auth.sh@68 -- # keyid=1 00:19:18.065 14:23:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:18.065 14:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.065 14:23:59 -- common/autotest_common.sh@10 -- # set +x 00:19:18.065 14:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.065 14:23:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:18.065 14:23:59 -- nvmf/common.sh@717 -- # local ip 00:19:18.065 14:23:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:18.065 14:23:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:18.065 14:23:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.065 14:23:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.065 14:23:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:18.065 14:23:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.065 14:23:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:18.065 14:23:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:18.065 14:23:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:18.065 14:23:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:18.065 14:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.065 14:23:59 -- common/autotest_common.sh@10 -- # set +x 00:19:18.324 nvme0n1 00:19:18.324 14:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.324 14:23:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.324 14:23:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:18.324 14:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.324 14:23:59 -- common/autotest_common.sh@10 -- # set +x 00:19:18.324 14:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.324 14:23:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.324 14:23:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.324 14:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable 
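[annotation] Each connect_authenticate call (host/auth.sh@66-74) repeats the same steps, all present in the surrounding trace: limit the host to a single digest/DH-group pair, resolve the initiator address (for tcp, get_main_ns_ip selects NVMF_INITIATOR_IP, here 10.0.0.1), attach with the round's key, check that the controller appeared, then detach. Condensed below for the sha512/ffdhe2048/key1 round in progress at this point; rpc_cmd is the autotest wrapper around SPDK's JSON-RPC interface, and the success check is inferred from the [[ nvme0 == nvme0 ]] comparisons in the trace:

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
    # a successful DH-HMAC-CHAP handshake leaves controller nvme0 (and, presumably,
    # its namespace -- the bare nvme0n1 lines in the log) visible to the host
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0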
00:19:18.324 14:23:59 -- common/autotest_common.sh@10 -- # set +x 00:19:18.324 14:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.324 14:23:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:18.324 14:23:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:19:18.324 14:23:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:18.324 14:23:59 -- host/auth.sh@44 -- # digest=sha512 00:19:18.324 14:23:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:18.324 14:23:59 -- host/auth.sh@44 -- # keyid=2 00:19:18.324 14:23:59 -- host/auth.sh@45 -- # key=DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl: 00:19:18.324 14:23:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:18.324 14:23:59 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:18.324 14:23:59 -- host/auth.sh@49 -- # echo DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl: 00:19:18.324 14:23:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:19:18.324 14:23:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:18.324 14:23:59 -- host/auth.sh@68 -- # digest=sha512 00:19:18.324 14:23:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:18.324 14:23:59 -- host/auth.sh@68 -- # keyid=2 00:19:18.324 14:23:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:18.324 14:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.324 14:23:59 -- common/autotest_common.sh@10 -- # set +x 00:19:18.324 14:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.324 14:23:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:18.324 14:23:59 -- nvmf/common.sh@717 -- # local ip 00:19:18.324 14:23:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:18.324 14:23:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:18.324 14:23:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.324 14:23:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.324 14:23:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:18.324 14:23:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.324 14:23:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:18.324 14:23:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:18.324 14:23:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:18.324 14:23:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:18.324 14:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.324 14:23:59 -- common/autotest_common.sh@10 -- # set +x 00:19:18.583 nvme0n1 00:19:18.583 14:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.583 14:23:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:18.583 14:23:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.583 14:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.583 14:23:59 -- common/autotest_common.sh@10 -- # set +x 00:19:18.583 14:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.583 14:23:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.583 14:23:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.583 14:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.583 14:23:59 -- common/autotest_common.sh@10 -- # set +x 00:19:18.583 14:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.583 14:23:59 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:18.583 14:23:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:19:18.583 14:23:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:18.583 14:23:59 -- host/auth.sh@44 -- # digest=sha512 00:19:18.583 14:23:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:18.583 14:23:59 -- host/auth.sh@44 -- # keyid=3 00:19:18.583 14:23:59 -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==: 00:19:18.583 14:23:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:18.583 14:23:59 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:18.583 14:23:59 -- host/auth.sh@49 -- # echo DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==: 00:19:18.583 14:23:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:19:18.583 14:23:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:18.583 14:23:59 -- host/auth.sh@68 -- # digest=sha512 00:19:18.583 14:23:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:18.583 14:23:59 -- host/auth.sh@68 -- # keyid=3 00:19:18.583 14:23:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:18.583 14:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.583 14:23:59 -- common/autotest_common.sh@10 -- # set +x 00:19:18.583 14:23:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.583 14:23:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:18.583 14:23:59 -- nvmf/common.sh@717 -- # local ip 00:19:18.583 14:23:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:18.583 14:23:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:18.583 14:23:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.583 14:23:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.583 14:23:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:18.583 14:23:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.583 14:23:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:18.583 14:23:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:18.583 14:23:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:18.583 14:23:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:18.583 14:23:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.583 14:23:59 -- common/autotest_common.sh@10 -- # set +x 00:19:18.583 nvme0n1 00:19:18.583 14:24:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.583 14:24:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:18.583 14:24:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.583 14:24:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.583 14:24:00 -- common/autotest_common.sh@10 -- # set +x 00:19:18.583 14:24:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.842 14:24:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.842 14:24:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.842 14:24:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.842 14:24:00 -- common/autotest_common.sh@10 -- # set +x 00:19:18.842 14:24:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.842 14:24:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:18.842 14:24:00 -- host/auth.sh@110 -- # nvmet_auth_set_key 
sha512 ffdhe2048 4 00:19:18.842 14:24:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:18.842 14:24:00 -- host/auth.sh@44 -- # digest=sha512 00:19:18.842 14:24:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:18.842 14:24:00 -- host/auth.sh@44 -- # keyid=4 00:19:18.842 14:24:00 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=: 00:19:18.842 14:24:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:18.842 14:24:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:18.842 14:24:00 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=: 00:19:18.842 14:24:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:19:18.842 14:24:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:18.842 14:24:00 -- host/auth.sh@68 -- # digest=sha512 00:19:18.842 14:24:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:19:18.842 14:24:00 -- host/auth.sh@68 -- # keyid=4 00:19:18.842 14:24:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:18.842 14:24:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.842 14:24:00 -- common/autotest_common.sh@10 -- # set +x 00:19:18.842 14:24:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.842 14:24:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:18.842 14:24:00 -- nvmf/common.sh@717 -- # local ip 00:19:18.842 14:24:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:18.842 14:24:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:18.842 14:24:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.842 14:24:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.842 14:24:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:18.842 14:24:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.842 14:24:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:18.842 14:24:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:18.842 14:24:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:18.842 14:24:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:18.842 14:24:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.842 14:24:00 -- common/autotest_common.sh@10 -- # set +x 00:19:18.842 nvme0n1 00:19:18.842 14:24:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.842 14:24:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.842 14:24:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.842 14:24:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:18.842 14:24:00 -- common/autotest_common.sh@10 -- # set +x 00:19:18.842 14:24:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.842 14:24:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.842 14:24:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.842 14:24:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.842 14:24:00 -- common/autotest_common.sh@10 -- # set +x 00:19:18.842 14:24:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.842 14:24:00 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:18.842 14:24:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:18.842 14:24:00 -- host/auth.sh@110 -- # nvmet_auth_set_key 
sha512 ffdhe3072 0 00:19:18.842 14:24:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:18.842 14:24:00 -- host/auth.sh@44 -- # digest=sha512 00:19:18.842 14:24:00 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:18.842 14:24:00 -- host/auth.sh@44 -- # keyid=0 00:19:18.842 14:24:00 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6: 00:19:18.842 14:24:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:18.842 14:24:00 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:18.842 14:24:00 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6: 00:19:18.842 14:24:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:19:18.842 14:24:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:18.842 14:24:00 -- host/auth.sh@68 -- # digest=sha512 00:19:18.842 14:24:00 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:18.842 14:24:00 -- host/auth.sh@68 -- # keyid=0 00:19:18.842 14:24:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:18.842 14:24:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.842 14:24:00 -- common/autotest_common.sh@10 -- # set +x 00:19:18.842 14:24:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.842 14:24:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:18.842 14:24:00 -- nvmf/common.sh@717 -- # local ip 00:19:18.842 14:24:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:18.842 14:24:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:18.842 14:24:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:18.842 14:24:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:18.842 14:24:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:18.842 14:24:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:18.843 14:24:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:18.843 14:24:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:18.843 14:24:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:18.843 14:24:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:18.843 14:24:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.843 14:24:00 -- common/autotest_common.sh@10 -- # set +x 00:19:19.101 nvme0n1 00:19:19.101 14:24:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.101 14:24:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.101 14:24:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.101 14:24:00 -- common/autotest_common.sh@10 -- # set +x 00:19:19.101 14:24:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:19.101 14:24:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.101 14:24:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.101 14:24:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.101 14:24:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.101 14:24:00 -- common/autotest_common.sh@10 -- # set +x 00:19:19.101 14:24:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.101 14:24:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:19.101 14:24:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:19:19.101 14:24:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:19.101 14:24:00 -- host/auth.sh@44 -- # digest=sha512 00:19:19.101 
14:24:00 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:19.101 14:24:00 -- host/auth.sh@44 -- # keyid=1 00:19:19.101 14:24:00 -- host/auth.sh@45 -- # key=DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:19:19.101 14:24:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:19.102 14:24:00 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:19.102 14:24:00 -- host/auth.sh@49 -- # echo DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:19:19.102 14:24:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:19:19.102 14:24:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:19.102 14:24:00 -- host/auth.sh@68 -- # digest=sha512 00:19:19.102 14:24:00 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:19.102 14:24:00 -- host/auth.sh@68 -- # keyid=1 00:19:19.102 14:24:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:19.102 14:24:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.102 14:24:00 -- common/autotest_common.sh@10 -- # set +x 00:19:19.102 14:24:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.102 14:24:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:19.102 14:24:00 -- nvmf/common.sh@717 -- # local ip 00:19:19.102 14:24:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:19.102 14:24:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:19.102 14:24:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.102 14:24:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.102 14:24:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:19.102 14:24:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.102 14:24:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:19.102 14:24:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:19.102 14:24:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:19.102 14:24:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:19.102 14:24:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.102 14:24:00 -- common/autotest_common.sh@10 -- # set +x 00:19:19.360 nvme0n1 00:19:19.360 14:24:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.360 14:24:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.360 14:24:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.360 14:24:00 -- common/autotest_common.sh@10 -- # set +x 00:19:19.360 14:24:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:19.360 14:24:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.360 14:24:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.360 14:24:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.360 14:24:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.360 14:24:00 -- common/autotest_common.sh@10 -- # set +x 00:19:19.360 14:24:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.360 14:24:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:19.360 14:24:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:19:19.360 14:24:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:19.360 14:24:00 -- host/auth.sh@44 -- # digest=sha512 00:19:19.360 14:24:00 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:19.360 14:24:00 -- host/auth.sh@44 -- # keyid=2 00:19:19.360 
14:24:00 -- host/auth.sh@45 -- # key=DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl: 00:19:19.360 14:24:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:19.360 14:24:00 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:19.360 14:24:00 -- host/auth.sh@49 -- # echo DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl: 00:19:19.360 14:24:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:19:19.360 14:24:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:19.360 14:24:00 -- host/auth.sh@68 -- # digest=sha512 00:19:19.360 14:24:00 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:19.360 14:24:00 -- host/auth.sh@68 -- # keyid=2 00:19:19.360 14:24:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:19.360 14:24:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.360 14:24:00 -- common/autotest_common.sh@10 -- # set +x 00:19:19.360 14:24:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.360 14:24:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:19.360 14:24:00 -- nvmf/common.sh@717 -- # local ip 00:19:19.360 14:24:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:19.360 14:24:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:19.360 14:24:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.361 14:24:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.361 14:24:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:19.361 14:24:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.361 14:24:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:19.361 14:24:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:19.361 14:24:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:19.361 14:24:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:19.361 14:24:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.361 14:24:00 -- common/autotest_common.sh@10 -- # set +x 00:19:19.619 nvme0n1 00:19:19.619 14:24:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.619 14:24:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:19.619 14:24:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.619 14:24:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.619 14:24:00 -- common/autotest_common.sh@10 -- # set +x 00:19:19.619 14:24:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.619 14:24:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.619 14:24:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.619 14:24:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.619 14:24:01 -- common/autotest_common.sh@10 -- # set +x 00:19:19.619 14:24:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.619 14:24:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:19.619 14:24:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:19:19.619 14:24:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:19.619 14:24:01 -- host/auth.sh@44 -- # digest=sha512 00:19:19.619 14:24:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:19.619 14:24:01 -- host/auth.sh@44 -- # keyid=3 00:19:19.619 14:24:01 -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==: 00:19:19.619 14:24:01 -- host/auth.sh@47 -- # 
echo 'hmac(sha512)' 00:19:19.619 14:24:01 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:19.619 14:24:01 -- host/auth.sh@49 -- # echo DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==: 00:19:19.619 14:24:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:19:19.619 14:24:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:19.619 14:24:01 -- host/auth.sh@68 -- # digest=sha512 00:19:19.619 14:24:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:19.619 14:24:01 -- host/auth.sh@68 -- # keyid=3 00:19:19.619 14:24:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:19.619 14:24:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.619 14:24:01 -- common/autotest_common.sh@10 -- # set +x 00:19:19.619 14:24:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.619 14:24:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:19.619 14:24:01 -- nvmf/common.sh@717 -- # local ip 00:19:19.619 14:24:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:19.619 14:24:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:19.619 14:24:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.619 14:24:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.619 14:24:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:19.619 14:24:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.619 14:24:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:19.619 14:24:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:19.619 14:24:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:19.619 14:24:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:19.619 14:24:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.619 14:24:01 -- common/autotest_common.sh@10 -- # set +x 00:19:19.878 nvme0n1 00:19:19.878 14:24:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.878 14:24:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:19.878 14:24:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.878 14:24:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:19.878 14:24:01 -- common/autotest_common.sh@10 -- # set +x 00:19:19.878 14:24:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.878 14:24:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.878 14:24:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.878 14:24:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.878 14:24:01 -- common/autotest_common.sh@10 -- # set +x 00:19:19.878 14:24:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.878 14:24:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:19.878 14:24:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:19:19.878 14:24:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:19.878 14:24:01 -- host/auth.sh@44 -- # digest=sha512 00:19:19.878 14:24:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:19.878 14:24:01 -- host/auth.sh@44 -- # keyid=4 00:19:19.878 14:24:01 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=: 00:19:19.878 14:24:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:19.878 14:24:01 -- host/auth.sh@48 -- # echo ffdhe3072 00:19:19.878 
14:24:01 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=: 00:19:19.878 14:24:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:19:19.878 14:24:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:19.878 14:24:01 -- host/auth.sh@68 -- # digest=sha512 00:19:19.878 14:24:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:19:19.878 14:24:01 -- host/auth.sh@68 -- # keyid=4 00:19:19.878 14:24:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:19.878 14:24:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.878 14:24:01 -- common/autotest_common.sh@10 -- # set +x 00:19:19.878 14:24:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.878 14:24:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:19.878 14:24:01 -- nvmf/common.sh@717 -- # local ip 00:19:19.878 14:24:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:19.878 14:24:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:19.878 14:24:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:19.878 14:24:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:19.878 14:24:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:19.878 14:24:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:19.878 14:24:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:19.878 14:24:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:19.878 14:24:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:19.878 14:24:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:19.878 14:24:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.878 14:24:01 -- common/autotest_common.sh@10 -- # set +x 00:19:20.137 nvme0n1 00:19:20.137 14:24:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.137 14:24:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.137 14:24:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.137 14:24:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:20.137 14:24:01 -- common/autotest_common.sh@10 -- # set +x 00:19:20.137 14:24:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.137 14:24:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.137 14:24:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.137 14:24:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.137 14:24:01 -- common/autotest_common.sh@10 -- # set +x 00:19:20.137 14:24:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.137 14:24:01 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:20.137 14:24:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:20.137 14:24:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:19:20.137 14:24:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:20.137 14:24:01 -- host/auth.sh@44 -- # digest=sha512 00:19:20.137 14:24:01 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:20.137 14:24:01 -- host/auth.sh@44 -- # keyid=0 00:19:20.137 14:24:01 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6: 00:19:20.137 14:24:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:20.137 14:24:01 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:20.137 14:24:01 -- host/auth.sh@49 -- # echo 
DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6: 00:19:20.137 14:24:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:19:20.137 14:24:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:20.137 14:24:01 -- host/auth.sh@68 -- # digest=sha512 00:19:20.137 14:24:01 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:20.137 14:24:01 -- host/auth.sh@68 -- # keyid=0 00:19:20.137 14:24:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:20.137 14:24:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.137 14:24:01 -- common/autotest_common.sh@10 -- # set +x 00:19:20.137 14:24:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.137 14:24:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:20.137 14:24:01 -- nvmf/common.sh@717 -- # local ip 00:19:20.137 14:24:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:20.137 14:24:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:20.137 14:24:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.137 14:24:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.137 14:24:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:20.137 14:24:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.137 14:24:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:20.137 14:24:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:20.137 14:24:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:20.137 14:24:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:20.137 14:24:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.137 14:24:01 -- common/autotest_common.sh@10 -- # set +x 00:19:20.395 nvme0n1 00:19:20.395 14:24:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.395 14:24:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.395 14:24:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.395 14:24:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:20.395 14:24:01 -- common/autotest_common.sh@10 -- # set +x 00:19:20.395 14:24:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.395 14:24:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.395 14:24:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.395 14:24:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.395 14:24:01 -- common/autotest_common.sh@10 -- # set +x 00:19:20.395 14:24:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.395 14:24:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:20.395 14:24:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:19:20.395 14:24:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:20.395 14:24:01 -- host/auth.sh@44 -- # digest=sha512 00:19:20.395 14:24:01 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:20.395 14:24:01 -- host/auth.sh@44 -- # keyid=1 00:19:20.395 14:24:01 -- host/auth.sh@45 -- # key=DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:19:20.395 14:24:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:20.395 14:24:01 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:20.395 14:24:01 -- host/auth.sh@49 -- # echo DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:19:20.395 14:24:01 -- host/auth.sh@111 -- # 
connect_authenticate sha512 ffdhe4096 1 00:19:20.395 14:24:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:20.395 14:24:01 -- host/auth.sh@68 -- # digest=sha512 00:19:20.395 14:24:01 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:20.395 14:24:01 -- host/auth.sh@68 -- # keyid=1 00:19:20.395 14:24:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:20.395 14:24:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.396 14:24:01 -- common/autotest_common.sh@10 -- # set +x 00:19:20.396 14:24:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.396 14:24:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:20.396 14:24:01 -- nvmf/common.sh@717 -- # local ip 00:19:20.396 14:24:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:20.396 14:24:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:20.396 14:24:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.396 14:24:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.396 14:24:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:20.396 14:24:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.396 14:24:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:20.396 14:24:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:20.396 14:24:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:20.396 14:24:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:20.396 14:24:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.396 14:24:01 -- common/autotest_common.sh@10 -- # set +x 00:19:20.654 nvme0n1 00:19:20.654 14:24:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.654 14:24:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.654 14:24:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.654 14:24:02 -- common/autotest_common.sh@10 -- # set +x 00:19:20.654 14:24:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:20.654 14:24:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.654 14:24:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.654 14:24:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.654 14:24:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.654 14:24:02 -- common/autotest_common.sh@10 -- # set +x 00:19:20.654 14:24:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.654 14:24:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:20.654 14:24:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:19:20.654 14:24:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:20.654 14:24:02 -- host/auth.sh@44 -- # digest=sha512 00:19:20.654 14:24:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:20.654 14:24:02 -- host/auth.sh@44 -- # keyid=2 00:19:20.654 14:24:02 -- host/auth.sh@45 -- # key=DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl: 00:19:20.654 14:24:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:20.654 14:24:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:20.654 14:24:02 -- host/auth.sh@49 -- # echo DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl: 00:19:20.654 14:24:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:19:20.654 14:24:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:20.654 14:24:02 -- host/auth.sh@68 -- # 
digest=sha512 00:19:20.654 14:24:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:20.654 14:24:02 -- host/auth.sh@68 -- # keyid=2 00:19:20.654 14:24:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:20.654 14:24:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.654 14:24:02 -- common/autotest_common.sh@10 -- # set +x 00:19:20.654 14:24:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.654 14:24:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:20.654 14:24:02 -- nvmf/common.sh@717 -- # local ip 00:19:20.654 14:24:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:20.654 14:24:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:20.654 14:24:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:20.654 14:24:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:20.654 14:24:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:20.654 14:24:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:20.654 14:24:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:20.654 14:24:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:20.654 14:24:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:20.654 14:24:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:20.654 14:24:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.654 14:24:02 -- common/autotest_common.sh@10 -- # set +x 00:19:21.221 nvme0n1 00:19:21.221 14:24:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.221 14:24:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.221 14:24:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:21.221 14:24:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.221 14:24:02 -- common/autotest_common.sh@10 -- # set +x 00:19:21.221 14:24:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.221 14:24:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.221 14:24:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.221 14:24:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.221 14:24:02 -- common/autotest_common.sh@10 -- # set +x 00:19:21.221 14:24:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.221 14:24:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:21.221 14:24:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:19:21.221 14:24:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:21.221 14:24:02 -- host/auth.sh@44 -- # digest=sha512 00:19:21.221 14:24:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:21.221 14:24:02 -- host/auth.sh@44 -- # keyid=3 00:19:21.221 14:24:02 -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==: 00:19:21.221 14:24:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:21.221 14:24:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:21.221 14:24:02 -- host/auth.sh@49 -- # echo DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==: 00:19:21.221 14:24:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:19:21.221 14:24:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:21.221 14:24:02 -- host/auth.sh@68 -- # digest=sha512 00:19:21.221 14:24:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:21.221 14:24:02 -- host/auth.sh@68 
-- # keyid=3 00:19:21.221 14:24:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:21.221 14:24:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.221 14:24:02 -- common/autotest_common.sh@10 -- # set +x 00:19:21.221 14:24:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.221 14:24:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:21.221 14:24:02 -- nvmf/common.sh@717 -- # local ip 00:19:21.221 14:24:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:21.221 14:24:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:21.221 14:24:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.221 14:24:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.221 14:24:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:21.222 14:24:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:21.222 14:24:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:21.222 14:24:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:21.222 14:24:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:21.222 14:24:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:21.222 14:24:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.222 14:24:02 -- common/autotest_common.sh@10 -- # set +x 00:19:21.480 nvme0n1 00:19:21.480 14:24:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.480 14:24:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.480 14:24:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:21.480 14:24:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.480 14:24:02 -- common/autotest_common.sh@10 -- # set +x 00:19:21.480 14:24:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.480 14:24:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.480 14:24:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.480 14:24:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.480 14:24:02 -- common/autotest_common.sh@10 -- # set +x 00:19:21.480 14:24:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.480 14:24:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:21.480 14:24:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:19:21.480 14:24:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:21.480 14:24:02 -- host/auth.sh@44 -- # digest=sha512 00:19:21.480 14:24:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:21.480 14:24:02 -- host/auth.sh@44 -- # keyid=4 00:19:21.480 14:24:02 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=: 00:19:21.480 14:24:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:21.480 14:24:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:19:21.480 14:24:02 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=: 00:19:21.480 14:24:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:19:21.480 14:24:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:21.480 14:24:02 -- host/auth.sh@68 -- # digest=sha512 00:19:21.480 14:24:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:19:21.480 14:24:02 -- host/auth.sh@68 -- # keyid=4 00:19:21.480 14:24:02 -- host/auth.sh@69 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:21.480 14:24:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.480 14:24:02 -- common/autotest_common.sh@10 -- # set +x 00:19:21.480 14:24:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.480 14:24:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:21.480 14:24:02 -- nvmf/common.sh@717 -- # local ip 00:19:21.480 14:24:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:21.480 14:24:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:21.480 14:24:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.480 14:24:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.480 14:24:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:21.480 14:24:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:21.480 14:24:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:21.480 14:24:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:21.480 14:24:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:21.480 14:24:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:21.480 14:24:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.480 14:24:02 -- common/autotest_common.sh@10 -- # set +x 00:19:21.738 nvme0n1 00:19:21.738 14:24:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.738 14:24:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.738 14:24:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:21.738 14:24:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.738 14:24:03 -- common/autotest_common.sh@10 -- # set +x 00:19:21.738 14:24:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.738 14:24:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.738 14:24:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.738 14:24:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.738 14:24:03 -- common/autotest_common.sh@10 -- # set +x 00:19:21.738 14:24:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.738 14:24:03 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.738 14:24:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:21.738 14:24:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:19:21.738 14:24:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:21.738 14:24:03 -- host/auth.sh@44 -- # digest=sha512 00:19:21.738 14:24:03 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:21.738 14:24:03 -- host/auth.sh@44 -- # keyid=0 00:19:21.738 14:24:03 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6: 00:19:21.738 14:24:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:21.738 14:24:03 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:21.738 14:24:03 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6: 00:19:21.738 14:24:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:19:21.738 14:24:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:21.738 14:24:03 -- host/auth.sh@68 -- # digest=sha512 00:19:21.738 14:24:03 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:21.738 14:24:03 -- host/auth.sh@68 -- # keyid=0 00:19:21.738 14:24:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:21.738 
14:24:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.738 14:24:03 -- common/autotest_common.sh@10 -- # set +x 00:19:21.738 14:24:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.738 14:24:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:21.738 14:24:03 -- nvmf/common.sh@717 -- # local ip 00:19:21.738 14:24:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:21.738 14:24:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:21.738 14:24:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:21.738 14:24:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:21.738 14:24:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:21.738 14:24:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:21.738 14:24:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:21.738 14:24:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:21.738 14:24:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:21.738 14:24:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:21.738 14:24:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.738 14:24:03 -- common/autotest_common.sh@10 -- # set +x 00:19:22.304 nvme0n1 00:19:22.304 14:24:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.304 14:24:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:22.304 14:24:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:22.304 14:24:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.304 14:24:03 -- common/autotest_common.sh@10 -- # set +x 00:19:22.304 14:24:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.304 14:24:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.304 14:24:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:22.304 14:24:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.304 14:24:03 -- common/autotest_common.sh@10 -- # set +x 00:19:22.304 14:24:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.304 14:24:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:22.304 14:24:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:19:22.304 14:24:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:22.304 14:24:03 -- host/auth.sh@44 -- # digest=sha512 00:19:22.304 14:24:03 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:22.304 14:24:03 -- host/auth.sh@44 -- # keyid=1 00:19:22.304 14:24:03 -- host/auth.sh@45 -- # key=DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:19:22.304 14:24:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:22.304 14:24:03 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:22.304 14:24:03 -- host/auth.sh@49 -- # echo DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:19:22.304 14:24:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:19:22.304 14:24:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:22.304 14:24:03 -- host/auth.sh@68 -- # digest=sha512 00:19:22.304 14:24:03 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:22.304 14:24:03 -- host/auth.sh@68 -- # keyid=1 00:19:22.304 14:24:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:22.304 14:24:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.304 14:24:03 -- common/autotest_common.sh@10 -- # 
set +x 00:19:22.304 14:24:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.304 14:24:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:22.305 14:24:03 -- nvmf/common.sh@717 -- # local ip 00:19:22.305 14:24:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:22.305 14:24:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:22.305 14:24:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:22.305 14:24:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:22.305 14:24:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:22.305 14:24:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:22.305 14:24:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:22.305 14:24:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:22.305 14:24:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:22.305 14:24:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:22.305 14:24:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.305 14:24:03 -- common/autotest_common.sh@10 -- # set +x 00:19:23.238 nvme0n1 00:19:23.238 14:24:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.238 14:24:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:23.238 14:24:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.238 14:24:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:23.238 14:24:04 -- common/autotest_common.sh@10 -- # set +x 00:19:23.238 14:24:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.238 14:24:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.238 14:24:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.238 14:24:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.238 14:24:04 -- common/autotest_common.sh@10 -- # set +x 00:19:23.238 14:24:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.238 14:24:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:23.238 14:24:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:19:23.238 14:24:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:23.238 14:24:04 -- host/auth.sh@44 -- # digest=sha512 00:19:23.238 14:24:04 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:23.238 14:24:04 -- host/auth.sh@44 -- # keyid=2 00:19:23.238 14:24:04 -- host/auth.sh@45 -- # key=DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl: 00:19:23.238 14:24:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:23.238 14:24:04 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:23.238 14:24:04 -- host/auth.sh@49 -- # echo DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl: 00:19:23.238 14:24:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:19:23.238 14:24:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:23.238 14:24:04 -- host/auth.sh@68 -- # digest=sha512 00:19:23.238 14:24:04 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:23.238 14:24:04 -- host/auth.sh@68 -- # keyid=2 00:19:23.238 14:24:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:23.238 14:24:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.238 14:24:04 -- common/autotest_common.sh@10 -- # set +x 00:19:23.238 14:24:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.238 14:24:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:23.238 14:24:04 -- 
nvmf/common.sh@717 -- # local ip 00:19:23.238 14:24:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:23.238 14:24:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:23.238 14:24:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.238 14:24:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.238 14:24:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:23.238 14:24:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:23.238 14:24:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:23.238 14:24:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:23.238 14:24:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:23.238 14:24:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:23.238 14:24:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.238 14:24:04 -- common/autotest_common.sh@10 -- # set +x 00:19:23.804 nvme0n1 00:19:23.804 14:24:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.804 14:24:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:23.804 14:24:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.804 14:24:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:23.804 14:24:05 -- common/autotest_common.sh@10 -- # set +x 00:19:23.804 14:24:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.804 14:24:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.804 14:24:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:23.804 14:24:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.804 14:24:05 -- common/autotest_common.sh@10 -- # set +x 00:19:23.804 14:24:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.804 14:24:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:23.804 14:24:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:19:23.804 14:24:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:23.804 14:24:05 -- host/auth.sh@44 -- # digest=sha512 00:19:23.804 14:24:05 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:23.804 14:24:05 -- host/auth.sh@44 -- # keyid=3 00:19:23.804 14:24:05 -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==: 00:19:23.804 14:24:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:23.804 14:24:05 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:23.804 14:24:05 -- host/auth.sh@49 -- # echo DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==: 00:19:23.804 14:24:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:19:23.804 14:24:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:23.804 14:24:05 -- host/auth.sh@68 -- # digest=sha512 00:19:23.804 14:24:05 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:23.804 14:24:05 -- host/auth.sh@68 -- # keyid=3 00:19:23.804 14:24:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:23.804 14:24:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.804 14:24:05 -- common/autotest_common.sh@10 -- # set +x 00:19:23.804 14:24:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:23.804 14:24:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:23.804 14:24:05 -- nvmf/common.sh@717 -- # local ip 00:19:23.804 14:24:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:23.804 14:24:05 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:23.804 14:24:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:23.804 14:24:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:23.804 14:24:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:23.804 14:24:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:23.804 14:24:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:23.804 14:24:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:23.804 14:24:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:23.804 14:24:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:23.804 14:24:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:23.804 14:24:05 -- common/autotest_common.sh@10 -- # set +x 00:19:24.370 nvme0n1 00:19:24.370 14:24:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.370 14:24:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:24.370 14:24:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.370 14:24:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.370 14:24:05 -- common/autotest_common.sh@10 -- # set +x 00:19:24.370 14:24:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.370 14:24:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.370 14:24:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.370 14:24:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.370 14:24:05 -- common/autotest_common.sh@10 -- # set +x 00:19:24.370 14:24:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.370 14:24:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:24.370 14:24:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:19:24.370 14:24:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:24.370 14:24:05 -- host/auth.sh@44 -- # digest=sha512 00:19:24.370 14:24:05 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:24.370 14:24:05 -- host/auth.sh@44 -- # keyid=4 00:19:24.370 14:24:05 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=: 00:19:24.370 14:24:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:24.370 14:24:05 -- host/auth.sh@48 -- # echo ffdhe6144 00:19:24.370 14:24:05 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=: 00:19:24.370 14:24:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:19:24.370 14:24:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:24.370 14:24:05 -- host/auth.sh@68 -- # digest=sha512 00:19:24.370 14:24:05 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:19:24.370 14:24:05 -- host/auth.sh@68 -- # keyid=4 00:19:24.370 14:24:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:24.370 14:24:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.370 14:24:05 -- common/autotest_common.sh@10 -- # set +x 00:19:24.370 14:24:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.370 14:24:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:24.370 14:24:05 -- nvmf/common.sh@717 -- # local ip 00:19:24.370 14:24:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:24.370 14:24:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:24.370 14:24:05 -- 
nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.370 14:24:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.370 14:24:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:24.370 14:24:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:24.370 14:24:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:24.370 14:24:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:24.370 14:24:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:24.370 14:24:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:24.370 14:24:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.370 14:24:05 -- common/autotest_common.sh@10 -- # set +x 00:19:24.935 nvme0n1 00:19:24.935 14:24:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.935 14:24:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:24.935 14:24:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:24.935 14:24:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.935 14:24:06 -- common/autotest_common.sh@10 -- # set +x 00:19:24.935 14:24:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.935 14:24:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.935 14:24:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:24.935 14:24:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.935 14:24:06 -- common/autotest_common.sh@10 -- # set +x 00:19:24.935 14:24:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.935 14:24:06 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:19:24.935 14:24:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:24.935 14:24:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:19:24.935 14:24:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:24.935 14:24:06 -- host/auth.sh@44 -- # digest=sha512 00:19:24.935 14:24:06 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:24.935 14:24:06 -- host/auth.sh@44 -- # keyid=0 00:19:24.935 14:24:06 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6: 00:19:24.935 14:24:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:24.935 14:24:06 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:24.935 14:24:06 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGJjODdlNjU2MjAzYjkyMTczNTUwYTJiNjZjOTMyMDbm2rx6: 00:19:24.935 14:24:06 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:19:24.935 14:24:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:24.935 14:24:06 -- host/auth.sh@68 -- # digest=sha512 00:19:24.935 14:24:06 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:24.935 14:24:06 -- host/auth.sh@68 -- # keyid=0 00:19:24.935 14:24:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:24.935 14:24:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.935 14:24:06 -- common/autotest_common.sh@10 -- # set +x 00:19:24.935 14:24:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:24.935 14:24:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:24.935 14:24:06 -- nvmf/common.sh@717 -- # local ip 00:19:24.935 14:24:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:24.935 14:24:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:24.935 14:24:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:24.935 14:24:06 
-- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:24.935 14:24:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:24.935 14:24:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:24.935 14:24:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:24.935 14:24:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:24.935 14:24:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:24.935 14:24:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:19:24.935 14:24:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:24.935 14:24:06 -- common/autotest_common.sh@10 -- # set +x 00:19:26.305 nvme0n1 00:19:26.305 14:24:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.305 14:24:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.305 14:24:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:26.305 14:24:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.305 14:24:07 -- common/autotest_common.sh@10 -- # set +x 00:19:26.305 14:24:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.305 14:24:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.305 14:24:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.305 14:24:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.305 14:24:07 -- common/autotest_common.sh@10 -- # set +x 00:19:26.305 14:24:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.305 14:24:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:26.305 14:24:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:19:26.305 14:24:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:26.305 14:24:07 -- host/auth.sh@44 -- # digest=sha512 00:19:26.305 14:24:07 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:26.305 14:24:07 -- host/auth.sh@44 -- # keyid=1 00:19:26.305 14:24:07 -- host/auth.sh@45 -- # key=DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:19:26.305 14:24:07 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:26.305 14:24:07 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:26.305 14:24:07 -- host/auth.sh@49 -- # echo DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:19:26.305 14:24:07 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:19:26.305 14:24:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:26.305 14:24:07 -- host/auth.sh@68 -- # digest=sha512 00:19:26.305 14:24:07 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:26.305 14:24:07 -- host/auth.sh@68 -- # keyid=1 00:19:26.305 14:24:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:26.305 14:24:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.305 14:24:07 -- common/autotest_common.sh@10 -- # set +x 00:19:26.305 14:24:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:26.305 14:24:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:26.305 14:24:07 -- nvmf/common.sh@717 -- # local ip 00:19:26.305 14:24:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:26.305 14:24:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:26.305 14:24:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:26.305 14:24:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:26.305 14:24:07 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:19:26.305 14:24:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:26.305 14:24:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:26.305 14:24:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:26.305 14:24:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:26.305 14:24:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:19:26.305 14:24:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:26.305 14:24:07 -- common/autotest_common.sh@10 -- # set +x 00:19:27.239 nvme0n1 00:19:27.239 14:24:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.239 14:24:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.239 14:24:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:27.239 14:24:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.239 14:24:08 -- common/autotest_common.sh@10 -- # set +x 00:19:27.239 14:24:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.239 14:24:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.239 14:24:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.239 14:24:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.239 14:24:08 -- common/autotest_common.sh@10 -- # set +x 00:19:27.239 14:24:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.239 14:24:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:27.239 14:24:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:19:27.239 14:24:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:27.239 14:24:08 -- host/auth.sh@44 -- # digest=sha512 00:19:27.239 14:24:08 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:27.239 14:24:08 -- host/auth.sh@44 -- # keyid=2 00:19:27.239 14:24:08 -- host/auth.sh@45 -- # key=DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl: 00:19:27.239 14:24:08 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:27.239 14:24:08 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:27.239 14:24:08 -- host/auth.sh@49 -- # echo DHHC-1:01:MDExNjk1NjJlYjAzOTgwYWNmOTM4NDhiNWRlNDZjNWNqlIbl: 00:19:27.239 14:24:08 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:19:27.239 14:24:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:27.239 14:24:08 -- host/auth.sh@68 -- # digest=sha512 00:19:27.239 14:24:08 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:27.239 14:24:08 -- host/auth.sh@68 -- # keyid=2 00:19:27.239 14:24:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:27.239 14:24:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.239 14:24:08 -- common/autotest_common.sh@10 -- # set +x 00:19:27.239 14:24:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:27.239 14:24:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:27.239 14:24:08 -- nvmf/common.sh@717 -- # local ip 00:19:27.239 14:24:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:27.239 14:24:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:27.239 14:24:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.239 14:24:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.239 14:24:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:27.239 14:24:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.239 14:24:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:27.239 
14:24:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:27.239 14:24:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:27.239 14:24:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:27.239 14:24:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:27.239 14:24:08 -- common/autotest_common.sh@10 -- # set +x 00:19:28.612 nvme0n1 00:19:28.612 14:24:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.612 14:24:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.612 14:24:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.612 14:24:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:28.612 14:24:09 -- common/autotest_common.sh@10 -- # set +x 00:19:28.612 14:24:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.612 14:24:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.612 14:24:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.612 14:24:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.612 14:24:09 -- common/autotest_common.sh@10 -- # set +x 00:19:28.612 14:24:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.612 14:24:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:28.612 14:24:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:19:28.612 14:24:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:28.612 14:24:09 -- host/auth.sh@44 -- # digest=sha512 00:19:28.612 14:24:09 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:28.612 14:24:09 -- host/auth.sh@44 -- # keyid=3 00:19:28.612 14:24:09 -- host/auth.sh@45 -- # key=DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==: 00:19:28.612 14:24:09 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:28.612 14:24:09 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:28.612 14:24:09 -- host/auth.sh@49 -- # echo DHHC-1:02:YTYxNjk0MDU1MTY1ZjJjNThjMGRiYzdkMjQ5MmQwNDY3MTIwZmUxOGI1YTE1OGU0l16lqw==: 00:19:28.612 14:24:09 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:19:28.612 14:24:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:28.612 14:24:09 -- host/auth.sh@68 -- # digest=sha512 00:19:28.612 14:24:09 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:28.612 14:24:09 -- host/auth.sh@68 -- # keyid=3 00:19:28.612 14:24:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:28.612 14:24:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.612 14:24:09 -- common/autotest_common.sh@10 -- # set +x 00:19:28.612 14:24:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.612 14:24:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:28.612 14:24:09 -- nvmf/common.sh@717 -- # local ip 00:19:28.612 14:24:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:28.612 14:24:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:28.612 14:24:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:28.612 14:24:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:28.612 14:24:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:28.612 14:24:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:28.612 14:24:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:28.612 14:24:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:28.612 14:24:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
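Stripped of the xtrace plumbing, each pass of this loop is just four JSON-RPCs against the initiator-side nvmf_tgt. A standalone sketch of the iteration in flight here (sha512 digest, ffdhe8192 group, key slot 3) — assuming rpc.py at its usual scripts/ path in this checkout and the DH-HMAC-CHAP key slots already registered by the earlier, unshown part of the suite:

    # assumption: key3 was registered with the bdev layer before this loop ran
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # restrict negotiation to the single digest/dhgroup pair under test
    $RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # attach to the kernel target; this fails if the DH-HMAC-CHAP handshake is rejected
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3

    # verify the controller came up, then detach so the next combination starts clean
    $RPC bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    $RPC bdev_nvme_detach_controller nvme0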
00:19:28.612 14:24:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:19:28.612 14:24:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.612 14:24:09 -- common/autotest_common.sh@10 -- # set +x 00:19:29.545 nvme0n1 00:19:29.545 14:24:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.545 14:24:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:29.545 14:24:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.545 14:24:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.545 14:24:11 -- common/autotest_common.sh@10 -- # set +x 00:19:29.545 14:24:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.545 14:24:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.545 14:24:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.545 14:24:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.545 14:24:11 -- common/autotest_common.sh@10 -- # set +x 00:19:29.545 14:24:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.545 14:24:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:19:29.545 14:24:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:19:29.545 14:24:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:29.545 14:24:11 -- host/auth.sh@44 -- # digest=sha512 00:19:29.545 14:24:11 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:29.545 14:24:11 -- host/auth.sh@44 -- # keyid=4 00:19:29.545 14:24:11 -- host/auth.sh@45 -- # key=DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=: 00:19:29.545 14:24:11 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:19:29.545 14:24:11 -- host/auth.sh@48 -- # echo ffdhe8192 00:19:29.545 14:24:11 -- host/auth.sh@49 -- # echo DHHC-1:03:ZjIxNjUxMjIxYzFlYzYwMjA1NjA4YWM3ZmI5MmMzZjJhZDQzMzNiM2ZkOGQzNzkyZWQzY2ExZjEzOGU4ZDg2OIJRYBU=: 00:19:29.545 14:24:11 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:19:29.545 14:24:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:19:29.545 14:24:11 -- host/auth.sh@68 -- # digest=sha512 00:19:29.545 14:24:11 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:19:29.545 14:24:11 -- host/auth.sh@68 -- # keyid=4 00:19:29.545 14:24:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:29.545 14:24:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.545 14:24:11 -- common/autotest_common.sh@10 -- # set +x 00:19:29.545 14:24:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.545 14:24:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:19:29.545 14:24:11 -- nvmf/common.sh@717 -- # local ip 00:19:29.545 14:24:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:29.545 14:24:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:29.545 14:24:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:29.545 14:24:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:29.545 14:24:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:29.545 14:24:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:29.545 14:24:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:29.545 14:24:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:29.545 14:24:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:29.545 14:24:11 -- host/auth.sh@70 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:29.545 14:24:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.545 14:24:11 -- common/autotest_common.sh@10 -- # set +x 00:19:30.921 nvme0n1 00:19:30.921 14:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.921 14:24:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.921 14:24:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:19:30.921 14:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.921 14:24:12 -- common/autotest_common.sh@10 -- # set +x 00:19:30.921 14:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.921 14:24:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.921 14:24:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.921 14:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.921 14:24:12 -- common/autotest_common.sh@10 -- # set +x 00:19:30.921 14:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.921 14:24:12 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:30.921 14:24:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:19:30.921 14:24:12 -- host/auth.sh@44 -- # digest=sha256 00:19:30.921 14:24:12 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:30.921 14:24:12 -- host/auth.sh@44 -- # keyid=1 00:19:30.921 14:24:12 -- host/auth.sh@45 -- # key=DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:19:30.921 14:24:12 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:19:30.921 14:24:12 -- host/auth.sh@48 -- # echo ffdhe2048 00:19:30.921 14:24:12 -- host/auth.sh@49 -- # echo DHHC-1:00:N2UwZDU3M2JjMGEyOTA0MTZiYTllYzgzOTQ0N2JjMzNjNTA0OTQyMmQwMWZiNzRjqfQNww==: 00:19:30.921 14:24:12 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:30.921 14:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.921 14:24:12 -- common/autotest_common.sh@10 -- # set +x 00:19:30.921 14:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.921 14:24:12 -- host/auth.sh@119 -- # get_main_ns_ip 00:19:30.921 14:24:12 -- nvmf/common.sh@717 -- # local ip 00:19:30.921 14:24:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:30.921 14:24:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:30.921 14:24:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.921 14:24:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.921 14:24:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:30.921 14:24:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.921 14:24:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:30.921 14:24:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:30.921 14:24:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:30.921 14:24:12 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:30.921 14:24:12 -- common/autotest_common.sh@638 -- # local es=0 00:19:30.921 14:24:12 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:30.921 14:24:12 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:30.921 14:24:12 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:30.921 14:24:12 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:30.921 14:24:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:30.921 14:24:12 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:30.921 14:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.921 14:24:12 -- common/autotest_common.sh@10 -- # set +x 00:19:30.921 request: 00:19:30.921 { 00:19:30.921 "name": "nvme0", 00:19:30.921 "trtype": "tcp", 00:19:30.921 "traddr": "10.0.0.1", 00:19:30.921 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:30.921 "adrfam": "ipv4", 00:19:30.921 "trsvcid": "4420", 00:19:30.921 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:30.921 "method": "bdev_nvme_attach_controller", 00:19:30.921 "req_id": 1 00:19:30.921 } 00:19:30.921 Got JSON-RPC error response 00:19:30.921 response: 00:19:30.921 { 00:19:30.921 "code": -32602, 00:19:30.921 "message": "Invalid parameters" 00:19:30.921 } 00:19:30.921 14:24:12 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:30.921 14:24:12 -- common/autotest_common.sh@641 -- # es=1 00:19:30.921 14:24:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:30.921 14:24:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:30.921 14:24:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:30.921 14:24:12 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.921 14:24:12 -- host/auth.sh@121 -- # jq length 00:19:30.921 14:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.921 14:24:12 -- common/autotest_common.sh@10 -- # set +x 00:19:30.921 14:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.921 14:24:12 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:19:30.921 14:24:12 -- host/auth.sh@124 -- # get_main_ns_ip 00:19:30.921 14:24:12 -- nvmf/common.sh@717 -- # local ip 00:19:30.921 14:24:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:19:30.921 14:24:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:19:30.921 14:24:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:30.921 14:24:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:30.921 14:24:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:19:30.921 14:24:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:30.921 14:24:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:19:30.921 14:24:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:19:30.921 14:24:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:19:30.921 14:24:12 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:30.921 14:24:12 -- common/autotest_common.sh@638 -- # local es=0 00:19:30.921 14:24:12 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:30.921 14:24:12 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:30.921 14:24:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:30.921 14:24:12 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:30.921 14:24:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:30.921 14:24:12 -- 
common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:30.921 14:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.921 14:24:12 -- common/autotest_common.sh@10 -- # set +x 00:19:30.921 request: 00:19:30.921 { 00:19:30.921 "name": "nvme0", 00:19:30.921 "trtype": "tcp", 00:19:30.921 "traddr": "10.0.0.1", 00:19:30.921 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:30.921 "adrfam": "ipv4", 00:19:30.921 "trsvcid": "4420", 00:19:30.921 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:30.921 "dhchap_key": "key2", 00:19:30.921 "method": "bdev_nvme_attach_controller", 00:19:30.921 "req_id": 1 00:19:30.921 } 00:19:30.921 Got JSON-RPC error response 00:19:30.921 response: 00:19:30.921 { 00:19:30.921 "code": -32602, 00:19:30.921 "message": "Invalid parameters" 00:19:30.921 } 00:19:30.921 14:24:12 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:30.921 14:24:12 -- common/autotest_common.sh@641 -- # es=1 00:19:30.921 14:24:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:30.921 14:24:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:30.921 14:24:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:30.921 14:24:12 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.921 14:24:12 -- host/auth.sh@127 -- # jq length 00:19:30.921 14:24:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.921 14:24:12 -- common/autotest_common.sh@10 -- # set +x 00:19:30.921 14:24:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.921 14:24:12 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:19:30.921 14:24:12 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:19:30.921 14:24:12 -- host/auth.sh@130 -- # cleanup 00:19:30.921 14:24:12 -- host/auth.sh@24 -- # nvmftestfini 00:19:30.921 14:24:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:30.921 14:24:12 -- nvmf/common.sh@117 -- # sync 00:19:30.921 14:24:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:30.921 14:24:12 -- nvmf/common.sh@120 -- # set +e 00:19:30.921 14:24:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:30.921 14:24:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:30.921 rmmod nvme_tcp 00:19:30.921 rmmod nvme_fabrics 00:19:30.921 14:24:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:30.921 14:24:12 -- nvmf/common.sh@124 -- # set -e 00:19:30.921 14:24:12 -- nvmf/common.sh@125 -- # return 0 00:19:30.921 14:24:12 -- nvmf/common.sh@478 -- # '[' -n 3197652 ']' 00:19:30.921 14:24:12 -- nvmf/common.sh@479 -- # killprocess 3197652 00:19:30.921 14:24:12 -- common/autotest_common.sh@936 -- # '[' -z 3197652 ']' 00:19:30.921 14:24:12 -- common/autotest_common.sh@940 -- # kill -0 3197652 00:19:30.921 14:24:12 -- common/autotest_common.sh@941 -- # uname 00:19:30.921 14:24:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:30.921 14:24:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3197652 00:19:30.921 14:24:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:30.921 14:24:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:30.921 14:24:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3197652' 00:19:30.921 killing process with pid 3197652 00:19:30.921 14:24:12 -- common/autotest_common.sh@955 -- # kill 3197652 00:19:30.921 14:24:12 -- common/autotest_common.sh@960 -- # wait 3197652 00:19:31.181 14:24:12 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:31.181 14:24:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:31.181 14:24:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:31.181 14:24:12 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:31.181 14:24:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:31.181 14:24:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.181 14:24:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:31.181 14:24:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.716 14:24:14 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:33.716 14:24:14 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:33.716 14:24:14 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:33.716 14:24:14 -- host/auth.sh@27 -- # clean_kernel_target 00:19:33.716 14:24:14 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:19:33.716 14:24:14 -- nvmf/common.sh@675 -- # echo 0 00:19:33.716 14:24:14 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:33.716 14:24:14 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:33.716 14:24:14 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:33.716 14:24:14 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:33.716 14:24:14 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:19:33.716 14:24:14 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:19:33.716 14:24:14 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:19:34.282 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:19:34.282 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:19:34.282 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:19:34.282 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:19:34.282 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:19:34.282 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:19:34.282 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:19:34.282 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:19:34.282 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:19:34.282 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:19:34.282 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:19:34.282 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:19:34.282 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:19:34.282 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:19:34.282 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:19:34.282 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:19:35.219 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:19:35.478 14:24:16 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.rdu /tmp/spdk.key-null.ZrV /tmp/spdk.key-sha256.qkX /tmp/spdk.key-sha384.6HJ /tmp/spdk.key-sha512.2i5 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:19:35.478 14:24:16 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:19:36.413 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:19:36.413 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:19:36.413 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:19:36.413 0000:00:04.5 (8086 3c25): Already using the 
vfio-pci driver 00:19:36.413 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:19:36.413 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:19:36.413 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:19:36.413 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:19:36.413 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:19:36.413 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:19:36.413 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:19:36.413 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:19:36.413 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:19:36.413 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:19:36.413 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:19:36.413 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:19:36.413 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:19:36.413 00:19:36.413 real 0m49.691s 00:19:36.413 user 0m47.409s 00:19:36.413 sys 0m4.978s 00:19:36.413 14:24:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:36.413 14:24:17 -- common/autotest_common.sh@10 -- # set +x 00:19:36.413 ************************************ 00:19:36.413 END TEST nvmf_auth 00:19:36.413 ************************************ 00:19:36.413 14:24:17 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:19:36.413 14:24:17 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:36.413 14:24:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:36.413 14:24:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:36.413 14:24:17 -- common/autotest_common.sh@10 -- # set +x 00:19:36.413 ************************************ 00:19:36.413 START TEST nvmf_digest 00:19:36.413 ************************************ 00:19:36.413 14:24:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:36.673 * Looking for test storage... 
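Before the digest suite starts, the nvmf_auth cleanup a few records back dismantles the kernel target through configfs, where the ordering is dictated by the filesystem itself: symlinks and child directories must be removed before their parents, and nvmet/nvmet_tcp can only be unloaded once the tree is empty. Condensed from the log (the bare `echo 0`, whose redirect xtrace does not show, is assumed here to disable the namespace before removal):

    cfg=/sys/kernel/config/nvmet
    subsys=$cfg/subsystems/nqn.2024-02.io.spdk:cnode0

    rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"     # host symlink goes first
    rmdir "$cfg/hosts/nqn.2024-02.io.spdk:host0"
    echo 0 > "$subsys/namespaces/1/enable"                   # assumed target of the bare 'echo 0'
    rm -f "$cfg/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"
    rmdir "$subsys/namespaces/1"                             # children before parents
    rmdir "$cfg/ports/1"
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet                              # only once configfs is empty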
00:19:36.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:36.673 14:24:18 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:36.673 14:24:18 -- nvmf/common.sh@7 -- # uname -s 00:19:36.673 14:24:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:36.673 14:24:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:36.673 14:24:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:36.673 14:24:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:36.673 14:24:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:36.673 14:24:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:36.673 14:24:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:36.673 14:24:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:36.673 14:24:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:36.673 14:24:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:36.673 14:24:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:36.673 14:24:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:19:36.673 14:24:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:36.673 14:24:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:36.673 14:24:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:36.673 14:24:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:36.673 14:24:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:36.673 14:24:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:36.673 14:24:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:36.673 14:24:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:36.673 14:24:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.673 14:24:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.673 14:24:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.673 14:24:18 -- paths/export.sh@5 -- # export PATH 00:19:36.673 14:24:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.673 14:24:18 -- nvmf/common.sh@47 -- # : 0 00:19:36.673 14:24:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:36.673 14:24:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:36.673 14:24:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:36.673 14:24:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:36.673 14:24:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:36.673 14:24:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:36.673 14:24:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:36.673 14:24:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:36.673 14:24:18 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:36.673 14:24:18 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:19:36.673 14:24:18 -- host/digest.sh@16 -- # runtime=2 00:19:36.673 14:24:18 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:19:36.673 14:24:18 -- host/digest.sh@138 -- # nvmftestinit 00:19:36.673 14:24:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:36.673 14:24:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:36.673 14:24:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:36.673 14:24:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:36.673 14:24:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:36.673 14:24:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.673 14:24:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:36.673 14:24:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.673 14:24:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:36.673 14:24:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:36.673 14:24:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:36.673 14:24:18 -- common/autotest_common.sh@10 -- # set +x 00:19:38.050 14:24:19 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:38.050 14:24:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:38.050 14:24:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:38.050 14:24:19 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:38.050 14:24:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:38.050 14:24:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:38.050 14:24:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:38.050 14:24:19 -- 
nvmf/common.sh@295 -- # net_devs=() 00:19:38.050 14:24:19 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:38.050 14:24:19 -- nvmf/common.sh@296 -- # e810=() 00:19:38.050 14:24:19 -- nvmf/common.sh@296 -- # local -ga e810 00:19:38.050 14:24:19 -- nvmf/common.sh@297 -- # x722=() 00:19:38.050 14:24:19 -- nvmf/common.sh@297 -- # local -ga x722 00:19:38.050 14:24:19 -- nvmf/common.sh@298 -- # mlx=() 00:19:38.050 14:24:19 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:38.050 14:24:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:38.050 14:24:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:38.050 14:24:19 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:38.050 14:24:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:38.050 14:24:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:38.050 14:24:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:38.050 14:24:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:38.050 14:24:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:38.050 14:24:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:38.050 14:24:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:38.050 14:24:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:38.050 14:24:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:38.050 14:24:19 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:38.050 14:24:19 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:38.050 14:24:19 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:38.050 14:24:19 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:38.050 14:24:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:38.050 14:24:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:38.050 14:24:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:38.050 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:38.050 14:24:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:38.050 14:24:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:38.050 14:24:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.050 14:24:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.050 14:24:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:38.050 14:24:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:38.050 14:24:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:38.050 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:38.050 14:24:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:38.050 14:24:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:38.050 14:24:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.050 14:24:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.050 14:24:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:38.050 14:24:19 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:38.050 14:24:19 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:38.050 14:24:19 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:38.050 14:24:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:38.050 14:24:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.050 14:24:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:38.050 14:24:19 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.050 14:24:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:38.050 Found net devices under 0000:08:00.0: cvl_0_0 00:19:38.050 14:24:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.050 14:24:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:38.050 14:24:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.050 14:24:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:38.050 14:24:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.050 14:24:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:38.050 Found net devices under 0000:08:00.1: cvl_0_1 00:19:38.050 14:24:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.050 14:24:19 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:38.050 14:24:19 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:38.050 14:24:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:38.050 14:24:19 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:38.050 14:24:19 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:38.050 14:24:19 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:38.050 14:24:19 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:38.050 14:24:19 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:38.050 14:24:19 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:38.050 14:24:19 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:38.050 14:24:19 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:38.050 14:24:19 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:38.050 14:24:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:38.050 14:24:19 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:38.050 14:24:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:38.050 14:24:19 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:38.309 14:24:19 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:38.309 14:24:19 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:38.309 14:24:19 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:38.309 14:24:19 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:38.309 14:24:19 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:38.309 14:24:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:38.309 14:24:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:38.309 14:24:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:38.309 14:24:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:38.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:38.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:19:38.309 00:19:38.309 --- 10.0.0.2 ping statistics --- 00:19:38.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.309 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:19:38.309 14:24:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:38.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:38.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:19:38.309 00:19:38.309 --- 10.0.0.1 ping statistics --- 00:19:38.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.309 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:19:38.309 14:24:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:38.309 14:24:19 -- nvmf/common.sh@411 -- # return 0 00:19:38.309 14:24:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:38.309 14:24:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:38.309 14:24:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:38.309 14:24:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:38.309 14:24:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:38.309 14:24:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:38.309 14:24:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:38.309 14:24:19 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:38.309 14:24:19 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:19:38.309 14:24:19 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:19:38.309 14:24:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:38.309 14:24:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:38.309 14:24:19 -- common/autotest_common.sh@10 -- # set +x 00:19:38.309 ************************************ 00:19:38.309 START TEST nvmf_digest_clean 00:19:38.309 ************************************ 00:19:38.309 14:24:19 -- common/autotest_common.sh@1111 -- # run_digest 00:19:38.309 14:24:19 -- host/digest.sh@120 -- # local dsa_initiator 00:19:38.309 14:24:19 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:19:38.309 14:24:19 -- host/digest.sh@121 -- # dsa_initiator=false 00:19:38.309 14:24:19 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:19:38.309 14:24:19 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:19:38.310 14:24:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:38.310 14:24:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:38.310 14:24:19 -- common/autotest_common.sh@10 -- # set +x 00:19:38.310 14:24:19 -- nvmf/common.sh@470 -- # nvmfpid=3205817 00:19:38.310 14:24:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:38.310 14:24:19 -- nvmf/common.sh@471 -- # waitforlisten 3205817 00:19:38.310 14:24:19 -- common/autotest_common.sh@817 -- # '[' -z 3205817 ']' 00:19:38.310 14:24:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.310 14:24:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:38.310 14:24:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.310 14:24:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:38.310 14:24:19 -- common/autotest_common.sh@10 -- # set +x 00:19:38.568 [2024-04-26 14:24:19.909823] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
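The nvmf_tcp_init trace above splits the two E810 ports into separate IP stacks so initiator and target traffic really crosses the wire. Condensed to just the state-changing commands it ran (interface names and addresses exactly as logged), the topology is:

# target port lives in its own namespace; initiator stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
ping -c 1 10.0.0.2                                                  # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator

Every nvmf_tgt that follows is therefore launched as ip netns exec cvl_0_0_ns_spdk ..., so it listens on 10.0.0.2:4420 from the target-side stack.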
00:19:38.568 [2024-04-26 14:24:19.909918] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.568 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.568 [2024-04-26 14:24:19.978816] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.568 [2024-04-26 14:24:20.096279] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.568 [2024-04-26 14:24:20.096338] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.568 [2024-04-26 14:24:20.096355] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.568 [2024-04-26 14:24:20.096369] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.568 [2024-04-26 14:24:20.096382] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:38.568 [2024-04-26 14:24:20.096414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.826 14:24:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:38.826 14:24:20 -- common/autotest_common.sh@850 -- # return 0 00:19:38.826 14:24:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:38.826 14:24:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:38.826 14:24:20 -- common/autotest_common.sh@10 -- # set +x 00:19:38.826 14:24:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.826 14:24:20 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:19:38.826 14:24:20 -- host/digest.sh@126 -- # common_target_config 00:19:38.826 14:24:20 -- host/digest.sh@43 -- # rpc_cmd 00:19:38.826 14:24:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.826 14:24:20 -- common/autotest_common.sh@10 -- # set +x 00:19:38.826 null0 00:19:38.826 [2024-04-26 14:24:20.289049] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.826 [2024-04-26 14:24:20.313245] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:38.826 14:24:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.826 14:24:20 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:19:38.826 14:24:20 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:38.826 14:24:20 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:38.826 14:24:20 -- host/digest.sh@80 -- # rw=randread 00:19:38.826 14:24:20 -- host/digest.sh@80 -- # bs=4096 00:19:38.826 14:24:20 -- host/digest.sh@80 -- # qd=128 00:19:38.826 14:24:20 -- host/digest.sh@80 -- # scan_dsa=false 00:19:38.826 14:24:20 -- host/digest.sh@83 -- # bperfpid=3205924 00:19:38.826 14:24:20 -- host/digest.sh@84 -- # waitforlisten 3205924 /var/tmp/bperf.sock 00:19:38.826 14:24:20 -- common/autotest_common.sh@817 -- # '[' -z 3205924 ']' 00:19:38.826 14:24:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:38.826 14:24:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:38.826 14:24:20 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:38.826 14:24:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:38.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:38.826 14:24:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:38.826 14:24:20 -- common/autotest_common.sh@10 -- # set +x 00:19:38.826 [2024-04-26 14:24:20.364618] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:19:38.826 [2024-04-26 14:24:20.364725] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3205924 ] 00:19:38.826 EAL: No free 2048 kB hugepages reported on node 1 00:19:39.084 [2024-04-26 14:24:20.424504] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.084 [2024-04-26 14:24:20.538861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.084 14:24:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:39.084 14:24:20 -- common/autotest_common.sh@850 -- # return 0 00:19:39.084 14:24:20 -- host/digest.sh@86 -- # false 00:19:39.084 14:24:20 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:39.084 14:24:20 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:39.652 14:24:20 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:39.652 14:24:20 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:40.218 nvme0n1 00:19:40.218 14:24:21 -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:40.218 14:24:21 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:40.218 Running I/O for 2 seconds... 
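That is the per-run shape of every digest benchmark in this suite: bdevperf is parked with --wait-for-rpc, the framework is released over the bperf socket, the controller is attached with TCP data digest enabled, and only then does the timed workload run. A sketch of the sequence just traced ($SPDK is shorthand here for the workspace checkout path shown above):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # shorthand for the full paths in the trace
# 1. start bdevperf idle so digest options can be set before any I/O
$SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
# 2. release the framework, then attach with data digest (--ddgst) on
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# 3. run the timed workload
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests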
00:19:42.161 
00:19:42.161                                                            Latency(us)
00:19:42.161 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:19:42.161 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:19:42.161 nvme0n1                     :       2.01   16855.16      65.84       0.00     0.00     7584.25    3883.61   17670.45
00:19:42.162 ===================================================================================================================
00:19:42.162 Total                       :              16855.16      65.84       0.00     0.00     7584.25    3883.61   17670.45
00:19:42.162 0
00:19:42.162 14:24:23 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:19:42.162 14:24:23 -- host/digest.sh@93 -- # get_accel_stats
00:19:42.162 14:24:23 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:19:42.162 14:24:23 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:19:42.162 | select(.opcode=="crc32c")
00:19:42.162 | "\(.module_name) \(.executed)"'
00:19:42.162 14:24:23 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:19:42.421 14:24:23 -- host/digest.sh@94 -- # false
00:19:42.421 14:24:23 -- host/digest.sh@94 -- # exp_module=software
00:19:42.421 14:24:23 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:19:42.421 14:24:23 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:19:42.421 14:24:23 -- host/digest.sh@98 -- # killprocess 3205924
00:19:42.421 14:24:23 -- common/autotest_common.sh@936 -- # '[' -z 3205924 ']'
00:19:42.421 14:24:23 -- common/autotest_common.sh@940 -- # kill -0 3205924
00:19:42.421 14:24:23 -- common/autotest_common.sh@941 -- # uname
00:19:42.421 14:24:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:42.421 14:24:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3205924
00:19:42.421 14:24:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:19:42.421 14:24:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:19:42.421 14:24:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3205924'
00:19:42.421 killing process with pid 3205924
00:19:42.421 14:24:23 -- common/autotest_common.sh@955 -- # kill 3205924
00:19:42.421 Received shutdown signal, test time was about 2.000000 seconds
00:19:42.421 
00:19:42.421                                                            Latency(us)
00:19:42.421 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:19:42.421 ===================================================================================================================
00:19:42.421 Total                       :                  0.00       0.00       0.00     0.00        0.00       0.00       0.00
00:19:42.421 14:24:23 -- common/autotest_common.sh@960 -- # wait 3205924
00:19:42.680 14:24:24 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:19:42.680 14:24:24 -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:19:42.680 14:24:24 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:19:42.680 14:24:24 -- host/digest.sh@80 -- # rw=randread
00:19:42.680 14:24:24 -- host/digest.sh@80 -- # bs=131072
00:19:42.680 14:24:24 -- host/digest.sh@80 -- # qd=16
00:19:42.680 14:24:24 -- host/digest.sh@80 -- # scan_dsa=false
00:19:42.680 14:24:24 -- host/digest.sh@83 -- # bperfpid=3206246
00:19:42.680 14:24:24 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:19:42.680 14:24:24 -- host/digest.sh@84 -- # waitforlisten 3206246 /var/tmp/bperf.sock
00:19:42.680 14:24:24 -- common/autotest_common.sh@817 -- # '[' -z 3206246 ']'
00:19:42.680 14:24:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:19:42.680 14:24:24 -- common/autotest_common.sh@822 -- # local max_retries=100
00:19:42.680 14:24:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:19:42.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:19:42.680 14:24:24 -- common/autotest_common.sh@826 -- # xtrace_disable
00:19:42.680 14:24:24 -- common/autotest_common.sh@10 -- # set +x
00:19:42.680 [2024-04-26 14:24:24.230686] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
00:19:42.680 [2024-04-26 14:24:24.230787] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3206246 ]
00:19:42.680 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:42.680 Zero copy mechanism will not be used.
00:19:42.939 EAL: No free 2048 kB hugepages reported on node 1
00:19:42.939 [2024-04-26 14:24:24.291327] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:42.939 [2024-04-26 14:24:24.405772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:19:42.939 14:24:24 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:19:42.939 14:24:24 -- common/autotest_common.sh@850 -- # return 0
00:19:42.939 14:24:24 -- host/digest.sh@86 -- # false
00:19:42.939 14:24:24 -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:19:42.939 14:24:24 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:19:43.506 14:24:24 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:43.506 14:24:24 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:43.763 nvme0n1
00:19:43.763 14:24:25 -- host/digest.sh@92 -- # bperf_py perform_tests
00:19:43.763 14:24:25 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:19:43.763 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:43.763 Zero copy mechanism will not be used.
00:19:43.763 Running I/O for 2 seconds...
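After each run the helper does not trust the I/O numbers alone: it pulls the accel framework's counters and checks that crc32c really executed in the expected module (software here, since scan_dsa=false). The check traced above boils down to:

# which module computed crc32c, and how many times? expect "software <n>" with n > 0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'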
00:19:46.293 
00:19:46.293                                                            Latency(us)
00:19:46.293 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:19:46.293 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:19:46.293 nvme0n1                     :       2.00    4839.43     604.93       0.00     0.00     3301.77     885.95    6893.42
00:19:46.293 ===================================================================================================================
00:19:46.293 Total                       :               4839.43     604.93       0.00     0.00     3301.77     885.95    6893.42
00:19:46.293 0
00:19:46.293 14:24:27 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:19:46.293 14:24:27 -- host/digest.sh@93 -- # get_accel_stats
00:19:46.293 14:24:27 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:19:46.293 14:24:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:19:46.293 14:24:27 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:19:46.293 | select(.opcode=="crc32c")
00:19:46.293 | "\(.module_name) \(.executed)"'
00:19:46.293 14:24:27 -- host/digest.sh@94 -- # false
00:19:46.293 14:24:27 -- host/digest.sh@94 -- # exp_module=software
00:19:46.293 14:24:27 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:19:46.293 14:24:27 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:19:46.293 14:24:27 -- host/digest.sh@98 -- # killprocess 3206246
00:19:46.293 14:24:27 -- common/autotest_common.sh@936 -- # '[' -z 3206246 ']'
00:19:46.293 14:24:27 -- common/autotest_common.sh@940 -- # kill -0 3206246
00:19:46.293 14:24:27 -- common/autotest_common.sh@941 -- # uname
00:19:46.293 14:24:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:46.293 14:24:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3206246
00:19:46.293 14:24:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:19:46.293 14:24:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:19:46.293 14:24:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3206246'
00:19:46.293 killing process with pid 3206246
00:19:46.293 14:24:27 -- common/autotest_common.sh@955 -- # kill 3206246
00:19:46.293 Received shutdown signal, test time was about 2.000000 seconds
00:19:46.293 
00:19:46.293                                                            Latency(us)
00:19:46.293 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:19:46.293 ===================================================================================================================
00:19:46.293 Total                       :                  0.00       0.00       0.00     0.00        0.00       0.00       0.00
00:19:46.293 14:24:27 -- common/autotest_common.sh@960 -- # wait 3206246
00:19:46.552 14:24:27 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:19:46.552 14:24:27 -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:19:46.552 14:24:27 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:19:46.552 14:24:27 -- host/digest.sh@80 -- # rw=randwrite
00:19:46.552 14:24:27 -- host/digest.sh@80 -- # bs=4096
00:19:46.552 14:24:27 -- host/digest.sh@80 -- # qd=128
00:19:46.552 14:24:27 -- host/digest.sh@80 -- # scan_dsa=false
00:19:46.552 14:24:27 -- host/digest.sh@83 -- # bperfpid=3206646
00:19:46.552 14:24:27 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:19:46.552 14:24:27 -- host/digest.sh@84 -- # waitforlisten 3206646 /var/tmp/bperf.sock
00:19:46.552 14:24:27 -- common/autotest_common.sh@817 -- # '[' -z 3206646 ']'
00:19:46.552 14:24:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:19:46.552 14:24:27 -- common/autotest_common.sh@822 -- # local max_retries=100
00:19:46.552 14:24:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:19:46.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:19:46.552 14:24:27 -- common/autotest_common.sh@826 -- # xtrace_disable
00:19:46.552 14:24:27 -- common/autotest_common.sh@10 -- # set +x
00:19:46.552 [2024-04-26 14:24:27.936542] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
00:19:46.552 [2024-04-26 14:24:27.936661] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3206646 ]
00:19:46.552 EAL: No free 2048 kB hugepages reported on node 1
00:19:46.552 [2024-04-26 14:24:27.997389] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:46.552 [2024-04-26 14:24:28.114779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:19:46.810 14:24:28 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:19:46.810 14:24:28 -- common/autotest_common.sh@850 -- # return 0
00:19:46.810 14:24:28 -- host/digest.sh@86 -- # false
00:19:46.810 14:24:28 -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:19:46.810 14:24:28 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:19:47.068 14:24:28 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:47.068 14:24:28 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:47.634 nvme0n1
00:19:47.634 14:24:28 -- host/digest.sh@92 -- # bperf_py perform_tests
00:19:47.634 14:24:28 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:19:47.634 Running I/O for 2 seconds...
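nvmf_digest_clean walks this same start/attach/measure/verify cycle over four workload profiles (run_bperf below stands for the helper traced above); note that only the 131072-byte runs print the "greater than zero copy threshold (65536)" notice, so those two profiles exercise the copy path as well:

# rw  io-size  queue-depth   (scan_dsa=false everywhere: software crc32c only)
for profile in 'randread 4096 128'  'randread 131072 16' \
               'randwrite 4096 128' 'randwrite 131072 16'; do
    run_bperf $profile false
done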
00:19:49.532 
00:19:49.532                                                            Latency(us)
00:19:49.532 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:19:49.532 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:19:49.532 nvme0n1                     :       2.01   17767.89      69.41       0.00     0.00     7186.20    3422.44   14369.37
00:19:49.532 ===================================================================================================================
00:19:49.532 Total                       :              17767.89      69.41       0.00     0.00     7186.20    3422.44   14369.37
00:19:49.532 0
00:19:49.532 14:24:31 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:19:49.532 14:24:31 -- host/digest.sh@93 -- # get_accel_stats
00:19:49.532 14:24:31 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:19:49.532 14:24:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:19:49.532 14:24:31 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:19:49.532 | select(.opcode=="crc32c")
00:19:49.532 | "\(.module_name) \(.executed)"'
00:19:49.789 14:24:31 -- host/digest.sh@94 -- # false
00:19:49.789 14:24:31 -- host/digest.sh@94 -- # exp_module=software
00:19:49.789 14:24:31 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:19:49.789 14:24:31 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:19:49.789 14:24:31 -- host/digest.sh@98 -- # killprocess 3206646
00:19:49.789 14:24:31 -- common/autotest_common.sh@936 -- # '[' -z 3206646 ']'
00:19:49.789 14:24:31 -- common/autotest_common.sh@940 -- # kill -0 3206646
00:19:50.046 14:24:31 -- common/autotest_common.sh@941 -- # uname
00:19:50.046 14:24:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:50.046 14:24:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3206646
00:19:50.046 14:24:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:19:50.046 14:24:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:19:50.046 14:24:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3206646'
00:19:50.046 killing process with pid 3206646
00:19:50.046 14:24:31 -- common/autotest_common.sh@955 -- # kill 3206646
00:19:50.046 Received shutdown signal, test time was about 2.000000 seconds
00:19:50.046 
00:19:50.046                                                            Latency(us)
00:19:50.046 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:19:50.046 ===================================================================================================================
00:19:50.046 Total                       :                  0.00       0.00       0.00     0.00        0.00       0.00       0.00
00:19:50.046 14:24:31 -- common/autotest_common.sh@960 -- # wait 3206646
00:19:50.046 14:24:31 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:19:50.046 14:24:31 -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:19:50.046 14:24:31 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:19:50.046 14:24:31 -- host/digest.sh@80 -- # rw=randwrite
00:19:50.046 14:24:31 -- host/digest.sh@80 -- # bs=131072
00:19:50.046 14:24:31 -- host/digest.sh@80 -- # qd=16
00:19:50.047 14:24:31 -- host/digest.sh@80 -- # scan_dsa=false
00:19:50.047 14:24:31 -- host/digest.sh@83 -- # bperfpid=3206958
00:19:50.047 14:24:31 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:19:50.047 14:24:31 -- host/digest.sh@84 -- # waitforlisten 3206958 /var/tmp/bperf.sock
00:19:50.047 14:24:31 -- common/autotest_common.sh@817 -- # '[' -z 3206958 ']'
00:19:50.047 14:24:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:19:50.047 14:24:31 -- common/autotest_common.sh@822 -- # local max_retries=100
00:19:50.047 14:24:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:19:50.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:19:50.047 14:24:31 -- common/autotest_common.sh@826 -- # xtrace_disable
00:19:50.047 14:24:31 -- common/autotest_common.sh@10 -- # set +x
00:19:50.303 [2024-04-26 14:24:31.647906] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
00:19:50.303 [2024-04-26 14:24:31.648008] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3206958 ]
00:19:50.303 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:50.303 Zero copy mechanism will not be used.
00:19:50.303 EAL: No free 2048 kB hugepages reported on node 1
00:19:50.303 [2024-04-26 14:24:31.708898] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:50.303 [2024-04-26 14:24:31.823479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:19:50.560 14:24:31 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:19:50.560 14:24:31 -- common/autotest_common.sh@850 -- # return 0
00:19:50.560 14:24:31 -- host/digest.sh@86 -- # false
00:19:50.560 14:24:31 -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:19:50.561 14:24:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:19:50.819 14:24:32 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:50.819 14:24:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:51.384 nvme0n1
00:19:51.384 14:24:32 -- host/digest.sh@92 -- # bperf_py perform_tests
00:19:51.384 14:24:32 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:19:51.384 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:51.384 Zero copy mechanism will not be used.
00:19:51.384 Running I/O for 2 seconds...
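The IOPS and MiB/s columns in these result tables are redundant by design, which makes them a quick self-check: MiB/s = IOPS x IO size / 2^20. For the randwrite 4096-byte row above:

awk 'BEGIN { printf "%.2f MiB/s\n", 17767.89 * 4096 / 1048576 }'   # prints 69.41, matching the table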
00:19:53.912 
00:19:53.912                                                            Latency(us)
00:19:53.912 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:19:53.912 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:19:53.912 nvme0n1                     :       2.00    5299.61     662.45       0.00     0.00     3011.25    2172.40   12718.84
00:19:53.912 ===================================================================================================================
00:19:53.912 Total                       :               5299.61     662.45       0.00     0.00     3011.25    2172.40   12718.84
00:19:53.912 0
00:19:53.912 14:24:34 -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:19:53.912 14:24:34 -- host/digest.sh@93 -- # get_accel_stats
00:19:53.912 14:24:34 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:19:53.912 14:24:34 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:19:53.912 | select(.opcode=="crc32c")
00:19:53.912 | "\(.module_name) \(.executed)"'
00:19:53.912 14:24:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:19:53.913 14:24:35 -- host/digest.sh@94 -- # false
00:19:53.913 14:24:35 -- host/digest.sh@94 -- # exp_module=software
00:19:53.913 14:24:35 -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:19:53.913 14:24:35 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:19:53.913 14:24:35 -- host/digest.sh@98 -- # killprocess 3206958
00:19:53.913 14:24:35 -- common/autotest_common.sh@936 -- # '[' -z 3206958 ']'
00:19:53.913 14:24:35 -- common/autotest_common.sh@940 -- # kill -0 3206958
00:19:53.913 14:24:35 -- common/autotest_common.sh@941 -- # uname
00:19:53.913 14:24:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:53.913 14:24:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3206958
00:19:53.913 14:24:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:19:53.913 14:24:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:19:53.913 14:24:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3206958'
00:19:53.913 killing process with pid 3206958
00:19:53.913 14:24:35 -- common/autotest_common.sh@955 -- # kill 3206958
00:19:53.913 Received shutdown signal, test time was about 2.000000 seconds
00:19:53.913 
00:19:53.913                                                            Latency(us)
00:19:53.913 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:19:53.913 ===================================================================================================================
00:19:53.913 Total                       :                  0.00       0.00       0.00     0.00        0.00       0.00       0.00
00:19:53.913 14:24:35 -- common/autotest_common.sh@960 -- # wait 3206958
00:19:53.913 14:24:35 -- host/digest.sh@132 -- # killprocess 3205817
00:19:53.913 14:24:35 -- common/autotest_common.sh@936 -- # '[' -z 3205817 ']'
00:19:53.913 14:24:35 -- common/autotest_common.sh@940 -- # kill -0 3205817
00:19:53.913 14:24:35 -- common/autotest_common.sh@941 -- # uname
00:19:53.913 14:24:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:53.913 14:24:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3205817
00:19:54.172 14:24:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:19:54.172 14:24:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:19:54.172 14:24:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3205817'
00:19:54.172 killing process with pid 3205817
00:19:54.172 14:24:35 -- common/autotest_common.sh@955 -- # kill 3205817
00:19:54.172 14:24:35 -- common/autotest_common.sh@960 -- # wait 3205817
00:19:54.172 
00:19:54.172 real 0m15.846s
00:19:54.172 user 0m31.910s
00:19:54.172 sys 0m4.076s
00:19:54.172 14:24:35 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:19:54.172 14:24:35 -- common/autotest_common.sh@10 -- # set +x
00:19:54.172 ************************************
00:19:54.172 END TEST nvmf_digest_clean
00:19:54.172 ************************************
00:19:54.172 14:24:35 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:19:54.172 14:24:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:19:54.172 14:24:35 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:19:54.172 14:24:35 -- common/autotest_common.sh@10 -- # set +x
00:19:54.432 ************************************
00:19:54.432 START TEST nvmf_digest_error
00:19:54.432 ************************************
00:19:54.432 14:24:35 -- common/autotest_common.sh@1111 -- # run_digest_error
00:19:54.432 14:24:35 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:19:54.432 14:24:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:19:54.432 14:24:35 -- common/autotest_common.sh@710 -- # xtrace_disable
00:19:54.432 14:24:35 -- common/autotest_common.sh@10 -- # set +x
00:19:54.432 14:24:35 -- nvmf/common.sh@470 -- # nvmfpid=3207398
00:19:54.432 14:24:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:19:54.432 14:24:35 -- nvmf/common.sh@471 -- # waitforlisten 3207398
00:19:54.432 14:24:35 -- common/autotest_common.sh@817 -- # '[' -z 3207398 ']'
00:19:54.432 14:24:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:54.432 14:24:35 -- common/autotest_common.sh@822 -- # local max_retries=100
00:19:54.432 14:24:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:54.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:54.432 14:24:35 -- common/autotest_common.sh@826 -- # xtrace_disable
00:19:54.432 14:24:35 -- common/autotest_common.sh@10 -- # set +x
00:19:54.432 [2024-04-26 14:24:35.904781] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
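The teardown just logged for pids 3206958 (bdevperf, reactor_1) and 3205817 (the target, reactor_0) follows one guard-railed pattern before the suite moves on. Reduced to the calls visible in this trace, a sketch of autotest_common.sh's killprocess (not the full helper; the real one handles the sudo case separately):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                            # refuse an empty pid
    kill -0 "$pid"                                       # is it still alive?
    [ "$(uname)" = Linux ] &&
        process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 / reactor_1, never sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                          # reap it so sockets and shm are freed
}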
00:19:54.691 [2024-04-26 14:24:36.087470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.691 14:24:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:54.691 14:24:36 -- common/autotest_common.sh@850 -- # return 0 00:19:54.691 14:24:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:54.691 14:24:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:54.691 14:24:36 -- common/autotest_common.sh@10 -- # set +x 00:19:54.691 14:24:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.691 14:24:36 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:19:54.691 14:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.691 14:24:36 -- common/autotest_common.sh@10 -- # set +x 00:19:54.691 [2024-04-26 14:24:36.176123] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:19:54.691 14:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.691 14:24:36 -- host/digest.sh@105 -- # common_target_config 00:19:54.691 14:24:36 -- host/digest.sh@43 -- # rpc_cmd 00:19:54.691 14:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:54.691 14:24:36 -- common/autotest_common.sh@10 -- # set +x 00:19:54.950 null0 00:19:54.950 [2024-04-26 14:24:36.283942] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.950 [2024-04-26 14:24:36.308156] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.950 14:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.950 14:24:36 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:19:54.950 14:24:36 -- host/digest.sh@54 -- # local rw bs qd 00:19:54.950 14:24:36 -- host/digest.sh@56 -- # rw=randread 00:19:54.950 14:24:36 -- host/digest.sh@56 -- # bs=4096 00:19:54.950 14:24:36 -- host/digest.sh@56 -- # qd=128 00:19:54.950 14:24:36 -- host/digest.sh@58 -- # bperfpid=3207418 00:19:54.950 14:24:36 -- host/digest.sh@60 -- # waitforlisten 3207418 /var/tmp/bperf.sock 00:19:54.950 14:24:36 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:19:54.950 14:24:36 -- common/autotest_common.sh@817 -- # '[' -z 3207418 ']' 00:19:54.950 14:24:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:54.950 14:24:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:54.950 14:24:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:54.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:54.950 14:24:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:54.950 14:24:36 -- common/autotest_common.sh@10 -- # set +x 00:19:54.950 [2024-04-26 14:24:36.358082] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
00:19:54.950 [2024-04-26 14:24:36.358180] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207418 ] 00:19:54.950 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.950 [2024-04-26 14:24:36.420173] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.209 [2024-04-26 14:24:36.535115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.209 14:24:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:55.209 14:24:36 -- common/autotest_common.sh@850 -- # return 0 00:19:55.209 14:24:36 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:55.209 14:24:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:55.468 14:24:36 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:55.468 14:24:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.468 14:24:36 -- common/autotest_common.sh@10 -- # set +x 00:19:55.468 14:24:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.468 14:24:36 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:55.468 14:24:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:55.726 nvme0n1 00:19:55.726 14:24:37 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:55.726 14:24:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.726 14:24:37 -- common/autotest_common.sh@10 -- # set +x 00:19:55.726 14:24:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.726 14:24:37 -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:55.726 14:24:37 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:55.985 Running I/O for 2 seconds... 
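This is the setup that makes nvmf_digest_error different from the clean suite: the target's crc32c opcode is routed through the accel "error" module, retries are made unlimited on the initiator so every corrupted digest stays observable, and 256 corruptions are queued before the workload starts. Condensed from the trace above (tgt_rpc and bperf_rpc are shorthand here for rpc.py against /var/tmp/spdk.sock and /var/tmp/bperf.sock respectively):

tgt_rpc accel_assign_opc -o crc32c -m error             # target: crc32c goes via the error module
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
tgt_rpc accel_error_inject_error -o crc32c -t disable   # start with injection off
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt the next 256 crc32c ops
bperf_py perform_tests                                  # expect data digest errors on reads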
00:19:55.985 [2024-04-26 14:24:37.429071] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:55.985 [2024-04-26 14:24:37.429123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.985 [2024-04-26 14:24:37.429145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.985 [2024-04-26 14:24:37.446972] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:55.985 [2024-04-26 14:24:37.447009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.985 [2024-04-26 14:24:37.447029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.985 [2024-04-26 14:24:37.460276] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:55.985 [2024-04-26 14:24:37.460309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.985 [2024-04-26 14:24:37.460328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.985 [2024-04-26 14:24:37.479962] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:55.985 [2024-04-26 14:24:37.479997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.985 [2024-04-26 14:24:37.480016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.985 [2024-04-26 14:24:37.493221] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:55.985 [2024-04-26 14:24:37.493255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.985 [2024-04-26 14:24:37.493282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.985 [2024-04-26 14:24:37.507513] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:55.985 [2024-04-26 14:24:37.507546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.985 [2024-04-26 14:24:37.507565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.985 [2024-04-26 14:24:37.522392] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:55.985 [2024-04-26 14:24:37.522425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.985 [2024-04-26 14:24:37.522444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:55.985 [2024-04-26 14:24:37.538204] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:55.985 [2024-04-26 14:24:37.538237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:55.985 [2024-04-26 14:24:37.538257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.244 [2024-04-26 14:24:37.554731] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.244 [2024-04-26 14:24:37.554765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.244 [2024-04-26 14:24:37.554784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.244 [2024-04-26 14:24:37.568230] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.244 [2024-04-26 14:24:37.568267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.244 [2024-04-26 14:24:37.568286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.244 [2024-04-26 14:24:37.585560] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.244 [2024-04-26 14:24:37.585593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.244 [2024-04-26 14:24:37.585612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.244 [2024-04-26 14:24:37.599110] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.244 [2024-04-26 14:24:37.599144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.244 [2024-04-26 14:24:37.599164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.244 [2024-04-26 14:24:37.616427] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.244 [2024-04-26 14:24:37.616459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.244 [2024-04-26 14:24:37.616478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.244 [2024-04-26 14:24:37.633048] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.244 [2024-04-26 14:24:37.633087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.244 [2024-04-26 14:24:37.633107] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.244 [2024-04-26 14:24:37.645944] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.244 [2024-04-26 14:24:37.645977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.244 [2024-04-26 14:24:37.645995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.244 [2024-04-26 14:24:37.663523] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.244 [2024-04-26 14:24:37.663558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.244 [2024-04-26 14:24:37.663577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.244 [2024-04-26 14:24:37.677313] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.244 [2024-04-26 14:24:37.677353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.244 [2024-04-26 14:24:37.677372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.244 [2024-04-26 14:24:37.694717] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.244 [2024-04-26 14:24:37.694757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.244 [2024-04-26 14:24:37.694775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.244 [2024-04-26 14:24:37.710269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.244 [2024-04-26 14:24:37.710302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.244 [2024-04-26 14:24:37.710321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.244 [2024-04-26 14:24:37.725446] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.244 [2024-04-26 14:24:37.725480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.244 [2024-04-26 14:24:37.725499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.244 [2024-04-26 14:24:37.740542] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.244 [2024-04-26 14:24:37.740576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
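Each injected corruption surfaces as a pair of lines like those above: a data digest error from nvme_tcp.c followed by a completion printed as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. retriable, so the bdevperf job keeps running. Against a saved copy of this console output the injections can be tallied with a one-liner (build.log is a placeholder name):

grep -c 'data digest error on tqpair' build.log   # one hit per corrupted crc32c completion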
00:19:56.244 [2024-04-26 14:24:37.740595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.244 [2024-04-26 14:24:37.754620] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.244 [2024-04-26 14:24:37.754661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.244 [2024-04-26 14:24:37.754695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.244 [2024-04-26 14:24:37.769123] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.244 [2024-04-26 14:24:37.769155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.244 [2024-04-26 14:24:37.769175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.244 [2024-04-26 14:24:37.783709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.244 [2024-04-26 14:24:37.783740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.244 [2024-04-26 14:24:37.783759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.244 [2024-04-26 14:24:37.798933] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.244 [2024-04-26 14:24:37.798967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.244 [2024-04-26 14:24:37.798986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.503 [2024-04-26 14:24:37.813345] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.503 [2024-04-26 14:24:37.813380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.503 [2024-04-26 14:24:37.813399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.503 [2024-04-26 14:24:37.828560] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.503 [2024-04-26 14:24:37.828593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.503 [2024-04-26 14:24:37.828611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.503 [2024-04-26 14:24:37.845454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.503 [2024-04-26 14:24:37.845496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 
lba:4494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.503 [2024-04-26 14:24:37.845515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.503 [2024-04-26 14:24:37.860575] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.503 [2024-04-26 14:24:37.860608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.503 [2024-04-26 14:24:37.860627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.503 [2024-04-26 14:24:37.875921] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.503 [2024-04-26 14:24:37.875953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.503 [2024-04-26 14:24:37.875972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.503 [2024-04-26 14:24:37.891020] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.503 [2024-04-26 14:24:37.891058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.503 [2024-04-26 14:24:37.891077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.503 [2024-04-26 14:24:37.906097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.503 [2024-04-26 14:24:37.906129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.503 [2024-04-26 14:24:37.906148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.503 [2024-04-26 14:24:37.919291] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.503 [2024-04-26 14:24:37.919324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.503 [2024-04-26 14:24:37.919342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.503 [2024-04-26 14:24:37.937215] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.504 [2024-04-26 14:24:37.937247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.504 [2024-04-26 14:24:37.937266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.504 [2024-04-26 14:24:37.949703] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.504 [2024-04-26 14:24:37.949736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.504 [2024-04-26 14:24:37.949754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.504 [2024-04-26 14:24:37.967660] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.504 [2024-04-26 14:24:37.967705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.504 [2024-04-26 14:24:37.967724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.504 [2024-04-26 14:24:37.983765] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.504 [2024-04-26 14:24:37.983800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.504 [2024-04-26 14:24:37.983818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.504 [2024-04-26 14:24:37.997085] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.504 [2024-04-26 14:24:37.997119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.504 [2024-04-26 14:24:37.997138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.504 [2024-04-26 14:24:38.014581] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.504 [2024-04-26 14:24:38.014616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.504 [2024-04-26 14:24:38.014645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.504 [2024-04-26 14:24:38.028130] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.504 [2024-04-26 14:24:38.028162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.504 [2024-04-26 14:24:38.028181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.504 [2024-04-26 14:24:38.044939] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.504 [2024-04-26 14:24:38.044974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.504 [2024-04-26 14:24:38.044993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.504 [2024-04-26 14:24:38.062125] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 
00:19:56.504 [2024-04-26 14:24:38.062159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.504 [2024-04-26 14:24:38.062178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.763 [2024-04-26 14:24:38.075171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.763 [2024-04-26 14:24:38.075219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.763 [2024-04-26 14:24:38.075242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.763 [2024-04-26 14:24:38.094530] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.763 [2024-04-26 14:24:38.094566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.763 [2024-04-26 14:24:38.094585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.763 [2024-04-26 14:24:38.110263] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.763 [2024-04-26 14:24:38.110296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.763 [2024-04-26 14:24:38.110315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.763 [2024-04-26 14:24:38.122980] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.763 [2024-04-26 14:24:38.123013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.763 [2024-04-26 14:24:38.123031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.763 [2024-04-26 14:24:38.138663] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.763 [2024-04-26 14:24:38.138697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:3744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.763 [2024-04-26 14:24:38.138716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.763 [2024-04-26 14:24:38.153934] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.763 [2024-04-26 14:24:38.153966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.763 [2024-04-26 14:24:38.153994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.763 [2024-04-26 14:24:38.168486] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.763 [2024-04-26 14:24:38.168519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.763 [2024-04-26 14:24:38.168537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.763 [2024-04-26 14:24:38.183164] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.763 [2024-04-26 14:24:38.183197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.763 [2024-04-26 14:24:38.183217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.763 [2024-04-26 14:24:38.195954] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.763 [2024-04-26 14:24:38.195986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.763 [2024-04-26 14:24:38.196004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.763 [2024-04-26 14:24:38.213214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.763 [2024-04-26 14:24:38.213255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.763 [2024-04-26 14:24:38.213273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.763 [2024-04-26 14:24:38.229547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.763 [2024-04-26 14:24:38.229581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.763 [2024-04-26 14:24:38.229600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.763 [2024-04-26 14:24:38.243839] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.763 [2024-04-26 14:24:38.243872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.763 [2024-04-26 14:24:38.243892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.763 [2024-04-26 14:24:38.260621] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.763 [2024-04-26 14:24:38.260660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.763 [2024-04-26 14:24:38.260679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:19:56.763 [2024-04-26 14:24:38.279924] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.763 [2024-04-26 14:24:38.279960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.763 [2024-04-26 14:24:38.279978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.763 [2024-04-26 14:24:38.294900] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.763 [2024-04-26 14:24:38.294942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.763 [2024-04-26 14:24:38.294961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.763 [2024-04-26 14:24:38.307620] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.763 [2024-04-26 14:24:38.307659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.763 [2024-04-26 14:24:38.307678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:56.763 [2024-04-26 14:24:38.323790] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:56.763 [2024-04-26 14:24:38.323823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.763 [2024-04-26 14:24:38.323841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.022 [2024-04-26 14:24:38.337037] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.022 [2024-04-26 14:24:38.337072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.022 [2024-04-26 14:24:38.337091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.022 [2024-04-26 14:24:38.351617] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.022 [2024-04-26 14:24:38.351660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.022 [2024-04-26 14:24:38.351679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.022 [2024-04-26 14:24:38.366048] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.022 [2024-04-26 14:24:38.366081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.022 [2024-04-26 14:24:38.366099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.022 [2024-04-26 14:24:38.382492] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.022 [2024-04-26 14:24:38.382526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.022 [2024-04-26 14:24:38.382545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.022 [2024-04-26 14:24:38.399837] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.022 [2024-04-26 14:24:38.399871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.022 [2024-04-26 14:24:38.399889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.022 [2024-04-26 14:24:38.414172] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.022 [2024-04-26 14:24:38.414205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.022 [2024-04-26 14:24:38.414224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.022 [2024-04-26 14:24:38.427387] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.022 [2024-04-26 14:24:38.427418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.022 [2024-04-26 14:24:38.427437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.022 [2024-04-26 14:24:38.444023] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.022 [2024-04-26 14:24:38.444057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.022 [2024-04-26 14:24:38.444075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.022 [2024-04-26 14:24:38.460436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.022 [2024-04-26 14:24:38.460469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.022 [2024-04-26 14:24:38.460487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.022 [2024-04-26 14:24:38.473763] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.022 [2024-04-26 14:24:38.473797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.022 [2024-04-26 14:24:38.473816] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.022 [2024-04-26 14:24:38.491344] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.022 [2024-04-26 14:24:38.491377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.022 [2024-04-26 14:24:38.491395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.023 [2024-04-26 14:24:38.504096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.023 [2024-04-26 14:24:38.504129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.023 [2024-04-26 14:24:38.504148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.023 [2024-04-26 14:24:38.521593] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.023 [2024-04-26 14:24:38.521625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.023 [2024-04-26 14:24:38.521652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.023 [2024-04-26 14:24:38.538004] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.023 [2024-04-26 14:24:38.538043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.023 [2024-04-26 14:24:38.538062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.023 [2024-04-26 14:24:38.553087] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.023 [2024-04-26 14:24:38.553128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.023 [2024-04-26 14:24:38.553148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.023 [2024-04-26 14:24:38.568266] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.023 [2024-04-26 14:24:38.568298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.023 [2024-04-26 14:24:38.568317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.023 [2024-04-26 14:24:38.580282] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.023 [2024-04-26 14:24:38.580314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:57.023 [2024-04-26 14:24:38.580332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.282 [2024-04-26 14:24:38.598541] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.282 [2024-04-26 14:24:38.598576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.282 [2024-04-26 14:24:38.598595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.282 [2024-04-26 14:24:38.613583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.282 [2024-04-26 14:24:38.613616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.282 [2024-04-26 14:24:38.613642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.282 [2024-04-26 14:24:38.628607] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.282 [2024-04-26 14:24:38.628649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.282 [2024-04-26 14:24:38.628669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.282 [2024-04-26 14:24:38.643473] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.282 [2024-04-26 14:24:38.643509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.282 [2024-04-26 14:24:38.643528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.282 [2024-04-26 14:24:38.657838] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.282 [2024-04-26 14:24:38.657870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.282 [2024-04-26 14:24:38.657888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.282 [2024-04-26 14:24:38.670850] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.282 [2024-04-26 14:24:38.670881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.282 [2024-04-26 14:24:38.670900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.282 [2024-04-26 14:24:38.687445] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.282 [2024-04-26 14:24:38.687478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 
lba:14134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.282 [2024-04-26 14:24:38.687496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.282 [2024-04-26 14:24:38.702890] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.282 [2024-04-26 14:24:38.702923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.282 [2024-04-26 14:24:38.702941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.282 [2024-04-26 14:24:38.717100] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.282 [2024-04-26 14:24:38.717132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.282 [2024-04-26 14:24:38.717151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.282 [2024-04-26 14:24:38.731601] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.282 [2024-04-26 14:24:38.731642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.282 [2024-04-26 14:24:38.731663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.282 [2024-04-26 14:24:38.746882] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.282 [2024-04-26 14:24:38.746915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.282 [2024-04-26 14:24:38.746934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.282 [2024-04-26 14:24:38.761094] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.282 [2024-04-26 14:24:38.761130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.282 [2024-04-26 14:24:38.761149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.282 [2024-04-26 14:24:38.776280] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.282 [2024-04-26 14:24:38.776320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.282 [2024-04-26 14:24:38.776339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.282 [2024-04-26 14:24:38.790833] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.282 [2024-04-26 14:24:38.790865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.282 [2024-04-26 14:24:38.790884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.282 [2024-04-26 14:24:38.804436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.282 [2024-04-26 14:24:38.804469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.282 [2024-04-26 14:24:38.804493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.282 [2024-04-26 14:24:38.819166] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.282 [2024-04-26 14:24:38.819197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.282 [2024-04-26 14:24:38.819216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.282 [2024-04-26 14:24:38.836728] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.282 [2024-04-26 14:24:38.836761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.282 [2024-04-26 14:24:38.836781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.541 [2024-04-26 14:24:38.851243] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.541 [2024-04-26 14:24:38.851277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.541 [2024-04-26 14:24:38.851297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.541 [2024-04-26 14:24:38.864482] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.541 [2024-04-26 14:24:38.864516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.541 [2024-04-26 14:24:38.864536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.541 [2024-04-26 14:24:38.881230] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.541 [2024-04-26 14:24:38.881265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.541 [2024-04-26 14:24:38.881284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.541 [2024-04-26 14:24:38.894582] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 
00:19:57.541 [2024-04-26 14:24:38.894614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.541 [2024-04-26 14:24:38.894640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.541 [2024-04-26 14:24:38.910275] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.541 [2024-04-26 14:24:38.910306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.541 [2024-04-26 14:24:38.910325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.541 [2024-04-26 14:24:38.927841] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.541 [2024-04-26 14:24:38.927873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.541 [2024-04-26 14:24:38.927892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.541 [2024-04-26 14:24:38.942466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.541 [2024-04-26 14:24:38.942505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.541 [2024-04-26 14:24:38.942524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.541 [2024-04-26 14:24:38.957045] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.541 [2024-04-26 14:24:38.957077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.541 [2024-04-26 14:24:38.957095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.541 [2024-04-26 14:24:38.971521] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.541 [2024-04-26 14:24:38.971553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.541 [2024-04-26 14:24:38.971571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.541 [2024-04-26 14:24:38.985611] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.541 [2024-04-26 14:24:38.985651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.541 [2024-04-26 14:24:38.985672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.541 [2024-04-26 14:24:39.003083] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.541 [2024-04-26 14:24:39.003122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.541 [2024-04-26 14:24:39.003141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.541 [2024-04-26 14:24:39.014961] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.541 [2024-04-26 14:24:39.014992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.541 [2024-04-26 14:24:39.015010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.541 [2024-04-26 14:24:39.032058] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.541 [2024-04-26 14:24:39.032090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.541 [2024-04-26 14:24:39.032108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.541 [2024-04-26 14:24:39.046779] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.541 [2024-04-26 14:24:39.046811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.541 [2024-04-26 14:24:39.046830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.541 [2024-04-26 14:24:39.061314] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.541 [2024-04-26 14:24:39.061346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.541 [2024-04-26 14:24:39.061365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.541 [2024-04-26 14:24:39.075741] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.541 [2024-04-26 14:24:39.075773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.541 [2024-04-26 14:24:39.075791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.541 [2024-04-26 14:24:39.090858] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.541 [2024-04-26 14:24:39.090890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.542 [2024-04-26 14:24:39.090908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.542 [2024-04-26 14:24:39.105040] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.542 [2024-04-26 14:24:39.105071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.542 [2024-04-26 14:24:39.105090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.800 [2024-04-26 14:24:39.122440] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.800 [2024-04-26 14:24:39.122472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.800 [2024-04-26 14:24:39.122491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.800 [2024-04-26 14:24:39.137429] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.800 [2024-04-26 14:24:39.137461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.800 [2024-04-26 14:24:39.137479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.800 [2024-04-26 14:24:39.152670] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.800 [2024-04-26 14:24:39.152702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.800 [2024-04-26 14:24:39.152721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.800 [2024-04-26 14:24:39.165776] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.800 [2024-04-26 14:24:39.165808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.800 [2024-04-26 14:24:39.165827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.800 [2024-04-26 14:24:39.183258] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.800 [2024-04-26 14:24:39.183289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.800 [2024-04-26 14:24:39.183308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.800 [2024-04-26 14:24:39.197259] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.800 [2024-04-26 14:24:39.197300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.800 [2024-04-26 14:24:39.197319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:19:57.800 [2024-04-26 14:24:39.214487] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.800 [2024-04-26 14:24:39.214518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.800 [2024-04-26 14:24:39.214537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.800 [2024-04-26 14:24:39.227827] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.800 [2024-04-26 14:24:39.227858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.800 [2024-04-26 14:24:39.227877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.800 [2024-04-26 14:24:39.243256] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.800 [2024-04-26 14:24:39.243288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.801 [2024-04-26 14:24:39.243306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.801 [2024-04-26 14:24:39.257266] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.801 [2024-04-26 14:24:39.257304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.801 [2024-04-26 14:24:39.257322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.801 [2024-04-26 14:24:39.272478] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.801 [2024-04-26 14:24:39.272510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.801 [2024-04-26 14:24:39.272529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.801 [2024-04-26 14:24:39.288629] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.801 [2024-04-26 14:24:39.288666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.801 [2024-04-26 14:24:39.288685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.801 [2024-04-26 14:24:39.301910] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.801 [2024-04-26 14:24:39.301940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.801 [2024-04-26 14:24:39.301958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.801 [2024-04-26 14:24:39.317244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.801 [2024-04-26 14:24:39.317276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.801 [2024-04-26 14:24:39.317295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.801 [2024-04-26 14:24:39.331874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.801 [2024-04-26 14:24:39.331905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.801 [2024-04-26 14:24:39.331923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.801 [2024-04-26 14:24:39.346529] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.801 [2024-04-26 14:24:39.346566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.801 [2024-04-26 14:24:39.346584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:57.801 [2024-04-26 14:24:39.361055] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:57.801 [2024-04-26 14:24:39.361086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.801 [2024-04-26 14:24:39.361105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.059 [2024-04-26 14:24:39.375910] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:58.059 [2024-04-26 14:24:39.375942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.060 [2024-04-26 14:24:39.375960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.060 [2024-04-26 14:24:39.391137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:58.060 [2024-04-26 14:24:39.391169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.060 [2024-04-26 14:24:39.391187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:58.060 [2024-04-26 14:24:39.405359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570) 00:19:58.060 [2024-04-26 14:24:39.405390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:58.060 [2024-04-26 14:24:39.405410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.060 [2024-04-26 14:24:39.420493] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17a1570)
00:19:58.060 [2024-04-26 14:24:39.420524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:58.060 [2024-04-26 14:24:39.420543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:58.060
00:19:58.060 Latency(us)
00:19:58.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:58.060 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:19:58.060 nvme0n1 : 2.01 16758.01 65.46 0.00 0.00 7625.06 4077.80 22039.51
00:19:58.060 ===================================================================================================================
00:19:58.060 Total : 16758.01 65.46 0.00 0.00 7625.06 4077.80 22039.51
00:19:58.060 0
00:19:58.060 14:24:39 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:19:58.060 14:24:39 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:19:58.060 14:24:39 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:19:58.060 | .driver_specific
00:19:58.060 | .nvme_error
00:19:58.060 | .status_code
00:19:58.060 | .command_transient_transport_error'
00:19:58.060 14:24:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:19:58.319 14:24:39 -- host/digest.sh@71 -- # (( 132 > 0 ))
00:19:58.319 14:24:39 -- host/digest.sh@73 -- # killprocess 3207418
00:19:58.319 14:24:39 -- common/autotest_common.sh@936 -- # '[' -z 3207418 ']'
00:19:58.319 14:24:39 -- common/autotest_common.sh@940 -- # kill -0 3207418
00:19:58.319 14:24:39 -- common/autotest_common.sh@941 -- # uname
00:19:58.319 14:24:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:58.319 14:24:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3207418
00:19:58.319 14:24:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:19:58.319 14:24:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:19:58.319 14:24:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3207418'
killing process with pid 3207418
14:24:39 -- common/autotest_common.sh@955 -- # kill 3207418
Received shutdown signal, test time was about 2.000000 seconds
00:19:58.319
00:19:58.319 Latency(us)
00:19:58.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:58.319 ===================================================================================================================
00:19:58.319 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:58.319 14:24:39 -- common/autotest_common.sh@960 -- # wait 3207418
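The pass above ends with the check the digest test actually cares about: get_transient_errcount pulls bdev_get_iostat over the bperf RPC socket and extracts how many completions carried COMMAND TRANSIENT TRANSPORT ERROR, and (( 132 > 0 )) asserts that the injected CRC32C corruption surfaced as transient transport errors (132 of them in this run) rather than as hard I/O failures. As a minimal sketch, assuming the same socket path and bdev name as in this run, the counter can be queried by hand:

    # count completions with status COMMAND TRANSIENT TRANSPORT ERROR on nvme0n1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The per-status-code counters are only collected because the controller was attached with bdev_nvme_set_options --nvme-error-stat in effect, as the setup of the next pass below shows.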
00:19:58.577 14:24:39 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:19:58.577 14:24:39 -- host/digest.sh@54 -- # local rw bs qd
00:19:58.577 14:24:39 -- host/digest.sh@56 -- # rw=randread
00:19:58.577 14:24:39 -- host/digest.sh@56 -- # bs=131072
00:19:58.577 14:24:39 -- host/digest.sh@56 -- # qd=16
00:19:58.577 14:24:39 -- host/digest.sh@58 -- # bperfpid=3207774
00:19:58.577 14:24:39 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:19:58.577 14:24:39 -- host/digest.sh@60 -- # waitforlisten 3207774 /var/tmp/bperf.sock
00:19:58.577 14:24:39 -- common/autotest_common.sh@817 -- # '[' -z 3207774 ']'
00:19:58.577 14:24:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:19:58.577 14:24:39 -- common/autotest_common.sh@822 -- # local max_retries=100
00:19:58.577 14:24:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:19:58.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
14:24:39 -- common/autotest_common.sh@826 -- # xtrace_disable
00:19:58.578 14:24:39 -- common/autotest_common.sh@10 -- # set +x
00:19:58.578 [2024-04-26 14:24:40.010193] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
00:19:58.578 [2024-04-26 14:24:40.010297] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207774 ]
00:19:58.578 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:58.578 Zero copy mechanism will not be used.
00:19:58.578 EAL: No free 2048 kB hugepages reported on node 1
00:19:58.578 [2024-04-26 14:24:40.071882] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:58.836 [2024-04-26 14:24:40.186549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:19:58.836 14:24:40 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:19:58.836 14:24:40 -- common/autotest_common.sh@850 -- # return 0
00:19:58.836 14:24:40 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:58.836 14:24:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:59.094 14:24:40 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:19:59.094 14:24:40 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:59.094 14:24:40 -- common/autotest_common.sh@10 -- # set +x
00:19:59.094 14:24:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:59.094 14:24:40 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:59.094 14:24:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:59.661 nvme0n1
00:19:59.661 14:24:41 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:19:59.661 14:24:41 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:59.661 14:24:41 -- common/autotest_common.sh@10 -- # set +x
00:19:59.661 14:24:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:59.661 14:24:41 -- host/digest.sh@69 -- # bperf_py perform_tests
00:19:59.661 14:24:41 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:19:59.661 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:59.661 Zero copy mechanism will not be used.
00:19:59.661 Running I/O for 2 seconds...
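This second pass repeats the experiment with 131072-byte (128 KiB) reads at queue depth 16, so the READ records that follow show len:32 blocks rather than the len:1 of the first pass. Condensed into a plain shell sketch (paths and flags are verbatim from the trace above; the '&' backgrounding stands in for the harness's waitforlisten helper, and accel_error_inject_error really goes through the script's rpc_cmd helper, whose target socket is not shown in this excerpt):

    # start a standalone bdevperf on core mask 0x2 that idles waiting for RPC commands (-z)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    # collect per-status-code NVMe error counters; -1 keeps retrying failed I/O
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # attach the NVMe/TCP controller with data digest enabled (--ddgst)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # after re-arming CRC32C corruption (rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32),
    # kick off the timed workload over the same socket
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests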
00:19:59.661 [2024-04-26 14:24:41.162151] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.661 [2024-04-26 14:24:41.162212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.661 [2024-04-26 14:24:41.162234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:59.661 [2024-04-26 14:24:41.170509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.661 [2024-04-26 14:24:41.170555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.661 [2024-04-26 14:24:41.170574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:59.661 [2024-04-26 14:24:41.178610] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.661 [2024-04-26 14:24:41.178655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.661 [2024-04-26 14:24:41.178675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:59.661 [2024-04-26 14:24:41.186556] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.661 [2024-04-26 14:24:41.186589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.661 [2024-04-26 14:24:41.186607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.661 [2024-04-26 14:24:41.194491] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.661 [2024-04-26 14:24:41.194526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.661 [2024-04-26 14:24:41.194544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:59.661 [2024-04-26 14:24:41.202524] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.661 [2024-04-26 14:24:41.202558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.661 [2024-04-26 14:24:41.202576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:59.661 [2024-04-26 14:24:41.210556] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.661 [2024-04-26 14:24:41.210609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.661 [2024-04-26 14:24:41.210628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:59.661 [2024-04-26 14:24:41.218509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.661 [2024-04-26 14:24:41.218542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.661 [2024-04-26 14:24:41.218560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.661 [2024-04-26 14:24:41.226345] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.661 [2024-04-26 14:24:41.226379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.661 [2024-04-26 14:24:41.226397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:59.920 [2024-04-26 14:24:41.234206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.920 [2024-04-26 14:24:41.234241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.920 [2024-04-26 14:24:41.234260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:59.920 [2024-04-26 14:24:41.242100] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.920 [2024-04-26 14:24:41.242134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.920 [2024-04-26 14:24:41.242152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:59.920 [2024-04-26 14:24:41.249927] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.920 [2024-04-26 14:24:41.249960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.921 [2024-04-26 14:24:41.249978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.257832] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.257865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.921 [2024-04-26 14:24:41.257883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.265684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.265717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.921 [2024-04-26 14:24:41.265734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.273492] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.273524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.921 [2024-04-26 14:24:41.273542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.281335] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.281368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.921 [2024-04-26 14:24:41.281385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.289173] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.289205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.921 [2024-04-26 14:24:41.289222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.296941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.296973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.921 [2024-04-26 14:24:41.296991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.304813] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.304846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.921 [2024-04-26 14:24:41.304864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.312683] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.312715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.921 [2024-04-26 14:24:41.312733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.320430] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.320462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:59.921 [2024-04-26 14:24:41.320480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.328280] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.328311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.921 [2024-04-26 14:24:41.328329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.336107] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.336140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.921 [2024-04-26 14:24:41.336157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.343945] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.343978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.921 [2024-04-26 14:24:41.344005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.351775] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.351808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.921 [2024-04-26 14:24:41.351825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.359583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.359614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.921 [2024-04-26 14:24:41.359638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.367403] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.367436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.921 [2024-04-26 14:24:41.367453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.375343] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.375374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.921 [2024-04-26 14:24:41.375392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.383222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.383253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.921 [2024-04-26 14:24:41.383270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.391022] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.391054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.921 [2024-04-26 14:24:41.391072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.398995] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.399028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.921 [2024-04-26 14:24:41.399045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.406958] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.406989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.921 [2024-04-26 14:24:41.407007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.414766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.414799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.921 [2024-04-26 14:24:41.414816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.422830] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.422863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.921 [2024-04-26 14:24:41.422880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.430680] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.430712] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.921 [2024-04-26 14:24:41.430730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.438522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.438554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.921 [2024-04-26 14:24:41.438572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.446368] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.446401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.921 [2024-04-26 14:24:41.446418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:59.921 [2024-04-26 14:24:41.454222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.921 [2024-04-26 14:24:41.454255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.922 [2024-04-26 14:24:41.454273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:59.922 [2024-04-26 14:24:41.462645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.922 [2024-04-26 14:24:41.462678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.922 [2024-04-26 14:24:41.462697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:59.922 [2024-04-26 14:24:41.471431] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.922 [2024-04-26 14:24:41.471465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.922 [2024-04-26 14:24:41.471483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:59.922 [2024-04-26 14:24:41.479930] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:19:59.922 [2024-04-26 14:24:41.479964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.922 [2024-04-26 14:24:41.479987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:59.922 [2024-04-26 14:24:41.488041] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 
00:19:59.922 [2024-04-26 14:24:41.488075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.922 [2024-04-26 14:24:41.488094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.496326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.496361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.496381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.504174] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.504208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.504226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.512029] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.512062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.512080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.519935] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.519967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.519985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.527822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.527854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.527872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.535576] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.535609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.535627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.543362] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.543395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.543413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.551186] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.551223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.551241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.559079] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.559111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.559129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.566939] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.566972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.566990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.574794] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.574826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.574844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.582595] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.582636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.582656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.590499] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.590531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.590549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.598345] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.598376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.598394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.606156] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.606187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.606205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.613962] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.613994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.614012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.621893] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.621925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.621943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.629784] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.629816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.629834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.637587] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.637620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.637646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.645412] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.645444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.645462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:20:00.179 [2024-04-26 14:24:41.653352] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.653385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.653403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.661279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.661310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.661328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.669059] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.669090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.669108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.677120] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.677151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.677169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.684977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.685009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.685033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.692813] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.179 [2024-04-26 14:24:41.692845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.179 [2024-04-26 14:24:41.692863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.179 [2024-04-26 14:24:41.700653] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.180 [2024-04-26 14:24:41.700685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.180 [2024-04-26 14:24:41.700702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.180 [2024-04-26 14:24:41.708426] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.180 [2024-04-26 14:24:41.708457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.180 [2024-04-26 14:24:41.708475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.180 [2024-04-26 14:24:41.716299] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.180 [2024-04-26 14:24:41.716331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.180 [2024-04-26 14:24:41.716349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.180 [2024-04-26 14:24:41.724160] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.180 [2024-04-26 14:24:41.724192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.180 [2024-04-26 14:24:41.724210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.180 [2024-04-26 14:24:41.732203] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.180 [2024-04-26 14:24:41.732238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.180 [2024-04-26 14:24:41.732256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.180 [2024-04-26 14:24:41.740875] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.180 [2024-04-26 14:24:41.740910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.180 [2024-04-26 14:24:41.740929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.463 [2024-04-26 14:24:41.749536] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.463 [2024-04-26 14:24:41.749572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.463 [2024-04-26 14:24:41.749590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.463 [2024-04-26 14:24:41.757868] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.463 [2024-04-26 14:24:41.757911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.463 [2024-04-26 14:24:41.757931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.463 [2024-04-26 14:24:41.766548] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.463 [2024-04-26 14:24:41.766584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.463 [2024-04-26 14:24:41.766603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.463 [2024-04-26 14:24:41.774917] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.463 [2024-04-26 14:24:41.774951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.463 [2024-04-26 14:24:41.774969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.463 [2024-04-26 14:24:41.783159] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.463 [2024-04-26 14:24:41.783193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.463 [2024-04-26 14:24:41.783211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.463 [2024-04-26 14:24:41.791265] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.463 [2024-04-26 14:24:41.791299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.463 [2024-04-26 14:24:41.791317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.463 [2024-04-26 14:24:41.799198] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.463 [2024-04-26 14:24:41.799232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.463 [2024-04-26 14:24:41.799250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.463 [2024-04-26 14:24:41.807153] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.463 [2024-04-26 14:24:41.807185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.464 [2024-04-26 14:24:41.807203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.464 [2024-04-26 14:24:41.815054] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.464 [2024-04-26 14:24:41.815086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.464 
[2024-04-26 14:24:41.815104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.464 [2024-04-26 14:24:41.822926] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.464 [2024-04-26 14:24:41.822959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.464 [2024-04-26 14:24:41.822977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.464 [2024-04-26 14:24:41.830906] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.464 [2024-04-26 14:24:41.830938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.464 [2024-04-26 14:24:41.830956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.464 [2024-04-26 14:24:41.838845] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.464 [2024-04-26 14:24:41.838879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.464 [2024-04-26 14:24:41.838897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.464 [2024-04-26 14:24:41.846832] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.464 [2024-04-26 14:24:41.846864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.464 [2024-04-26 14:24:41.846882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.464 [2024-04-26 14:24:41.854754] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.464 [2024-04-26 14:24:41.854786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.464 [2024-04-26 14:24:41.854804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.464 [2024-04-26 14:24:41.862694] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.464 [2024-04-26 14:24:41.862725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.464 [2024-04-26 14:24:41.862743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.464 [2024-04-26 14:24:41.870767] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.464 [2024-04-26 14:24:41.870800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19072 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:00.464 [2024-04-26 14:24:41.870819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.464 [2024-04-26 14:24:41.879880] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.464 [2024-04-26 14:24:41.879914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.464 [2024-04-26 14:24:41.879933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.464 [2024-04-26 14:24:41.888863] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.464 [2024-04-26 14:24:41.888898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.464 [2024-04-26 14:24:41.888917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.464 [2024-04-26 14:24:41.897524] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.464 [2024-04-26 14:24:41.897559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.464 [2024-04-26 14:24:41.897585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.464 [2024-04-26 14:24:41.906847] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.464 [2024-04-26 14:24:41.906883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.464 [2024-04-26 14:24:41.906902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.464 [2024-04-26 14:24:41.916005] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.464 [2024-04-26 14:24:41.916041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.464 [2024-04-26 14:24:41.916059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.464 [2024-04-26 14:24:41.924436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.464 [2024-04-26 14:24:41.924472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.464 [2024-04-26 14:24:41.924490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.464 [2024-04-26 14:24:41.932742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.464 [2024-04-26 14:24:41.932776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.464 [2024-04-26 14:24:41.932794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.464 [2024-04-26 14:24:41.937580] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.464 [2024-04-26 14:24:41.937615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.464 [2024-04-26 14:24:41.937641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.464 [2024-04-26 14:24:41.946333] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.464 [2024-04-26 14:24:41.946368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.464 [2024-04-26 14:24:41.946387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.464 [2024-04-26 14:24:41.954893] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.464 [2024-04-26 14:24:41.954927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.464 [2024-04-26 14:24:41.954945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.464 [2024-04-26 14:24:41.962826] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.464 [2024-04-26 14:24:41.962859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.464 [2024-04-26 14:24:41.962878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.464 [2024-04-26 14:24:41.970723] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.464 [2024-04-26 14:24:41.970756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.464 [2024-04-26 14:24:41.970774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.464 [2024-04-26 14:24:41.978559] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.464 [2024-04-26 14:24:41.978592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.464 [2024-04-26 14:24:41.978610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.464 [2024-04-26 14:24:41.986414] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.464 [2024-04-26 14:24:41.986445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.464 [2024-04-26 14:24:41.986463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.464 [2024-04-26 14:24:41.994301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.464 [2024-04-26 14:24:41.994335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.465 [2024-04-26 14:24:41.994353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.465 [2024-04-26 14:24:42.002232] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.465 [2024-04-26 14:24:42.002264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.465 [2024-04-26 14:24:42.002282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.465 [2024-04-26 14:24:42.010193] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.465 [2024-04-26 14:24:42.010226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.465 [2024-04-26 14:24:42.010244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.465 [2024-04-26 14:24:42.018096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.465 [2024-04-26 14:24:42.018128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.465 [2024-04-26 14:24:42.018146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.465 [2024-04-26 14:24:42.026046] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.465 [2024-04-26 14:24:42.026079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.465 [2024-04-26 14:24:42.026096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.723 [2024-04-26 14:24:42.034083] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.723 [2024-04-26 14:24:42.034118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.723 [2024-04-26 14:24:42.034143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.723 [2024-04-26 14:24:42.041956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.723 
[2024-04-26 14:24:42.041992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.723 [2024-04-26 14:24:42.042010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.723 [2024-04-26 14:24:42.049852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.723 [2024-04-26 14:24:42.049886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.723 [2024-04-26 14:24:42.049904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.723 [2024-04-26 14:24:42.057744] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.723 [2024-04-26 14:24:42.057776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.723 [2024-04-26 14:24:42.057794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.723 [2024-04-26 14:24:42.065696] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.723 [2024-04-26 14:24:42.065728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.723 [2024-04-26 14:24:42.065745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.723 [2024-04-26 14:24:42.073615] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.723 [2024-04-26 14:24:42.073654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.723 [2024-04-26 14:24:42.073673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.723 [2024-04-26 14:24:42.081504] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.723 [2024-04-26 14:24:42.081542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.723 [2024-04-26 14:24:42.081559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.723 [2024-04-26 14:24:42.089425] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.723 [2024-04-26 14:24:42.089457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.723 [2024-04-26 14:24:42.089475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.723 [2024-04-26 14:24:42.097366] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xb6d190) 00:20:00.724 [2024-04-26 14:24:42.097398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.724 [2024-04-26 14:24:42.097415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.724 [2024-04-26 14:24:42.105243] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.724 [2024-04-26 14:24:42.105281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.724 [2024-04-26 14:24:42.105300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.724 [2024-04-26 14:24:42.113167] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.724 [2024-04-26 14:24:42.113199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.724 [2024-04-26 14:24:42.113217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.724 [2024-04-26 14:24:42.121127] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.724 [2024-04-26 14:24:42.121168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.724 [2024-04-26 14:24:42.121185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.724 [2024-04-26 14:24:42.129176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.724 [2024-04-26 14:24:42.129209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.724 [2024-04-26 14:24:42.129227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.724 [2024-04-26 14:24:42.137124] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.724 [2024-04-26 14:24:42.137165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.724 [2024-04-26 14:24:42.137183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.724 [2024-04-26 14:24:42.145068] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.724 [2024-04-26 14:24:42.145102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.724 [2024-04-26 14:24:42.145127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.724 [2024-04-26 14:24:42.153040] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.724 [2024-04-26 14:24:42.153073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.724 [2024-04-26 14:24:42.153091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.724 [2024-04-26 14:24:42.161078] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.724 [2024-04-26 14:24:42.161110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.724 [2024-04-26 14:24:42.161128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.724 [2024-04-26 14:24:42.169034] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.724 [2024-04-26 14:24:42.169075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.724 [2024-04-26 14:24:42.169093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.724 [2024-04-26 14:24:42.177049] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.724 [2024-04-26 14:24:42.177081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.724 [2024-04-26 14:24:42.177099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.724 [2024-04-26 14:24:42.185467] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.724 [2024-04-26 14:24:42.185500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.724 [2024-04-26 14:24:42.185518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.724 [2024-04-26 14:24:42.193423] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.724 [2024-04-26 14:24:42.193455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.724 [2024-04-26 14:24:42.193473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.724 [2024-04-26 14:24:42.201353] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.724 [2024-04-26 14:24:42.201386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.724 [2024-04-26 14:24:42.201403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:20:00.724 [2024-04-26 14:24:42.209332] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.724 [2024-04-26 14:24:42.209365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.724 [2024-04-26 14:24:42.209389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.724 [2024-04-26 14:24:42.217368] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.724 [2024-04-26 14:24:42.217401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.724 [2024-04-26 14:24:42.217420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.724 [2024-04-26 14:24:42.225334] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.724 [2024-04-26 14:24:42.225366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.724 [2024-04-26 14:24:42.225384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.724 [2024-04-26 14:24:42.233334] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.724 [2024-04-26 14:24:42.233366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.724 [2024-04-26 14:24:42.233384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.724 [2024-04-26 14:24:42.241289] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.724 [2024-04-26 14:24:42.241322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.724 [2024-04-26 14:24:42.241346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.724 [2024-04-26 14:24:42.249260] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.724 [2024-04-26 14:24:42.249292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.724 [2024-04-26 14:24:42.249310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.724 [2024-04-26 14:24:42.257213] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.724 [2024-04-26 14:24:42.257245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.724 [2024-04-26 14:24:42.257262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.724 [2024-04-26 14:24:42.265206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.724 [2024-04-26 14:24:42.265238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.724 [2024-04-26 14:24:42.265256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.724 [2024-04-26 14:24:42.273165] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.724 [2024-04-26 14:24:42.273206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.724 [2024-04-26 14:24:42.273224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.724 [2024-04-26 14:24:42.281150] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.724 [2024-04-26 14:24:42.281190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.724 [2024-04-26 14:24:42.281208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.724 [2024-04-26 14:24:42.289236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.724 [2024-04-26 14:24:42.289270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.724 [2024-04-26 14:24:42.289289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.983 [2024-04-26 14:24:42.297279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.983 [2024-04-26 14:24:42.297314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.983 [2024-04-26 14:24:42.297334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.983 [2024-04-26 14:24:42.305187] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.983 [2024-04-26 14:24:42.305220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.983 [2024-04-26 14:24:42.305238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.983 [2024-04-26 14:24:42.313048] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.983 [2024-04-26 14:24:42.313086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.983 [2024-04-26 14:24:42.313105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.983 [2024-04-26 14:24:42.320946] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.983 [2024-04-26 14:24:42.320979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.983 [2024-04-26 14:24:42.320997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.983 [2024-04-26 14:24:42.328778] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.983 [2024-04-26 14:24:42.328810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.983 [2024-04-26 14:24:42.328827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.983 [2024-04-26 14:24:42.336700] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.983 [2024-04-26 14:24:42.336732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.983 [2024-04-26 14:24:42.336755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.983 [2024-04-26 14:24:42.344793] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.983 [2024-04-26 14:24:42.344827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.983 [2024-04-26 14:24:42.344845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.983 [2024-04-26 14:24:42.353714] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.983 [2024-04-26 14:24:42.353749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.983 [2024-04-26 14:24:42.353768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.983 [2024-04-26 14:24:42.362442] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.983 [2024-04-26 14:24:42.362476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.983 [2024-04-26 14:24:42.362494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.983 [2024-04-26 14:24:42.371237] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.983 [2024-04-26 14:24:42.371271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.983 [2024-04-26 14:24:42.371290] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.983 [2024-04-26 14:24:42.380289] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.983 [2024-04-26 14:24:42.380324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.983 [2024-04-26 14:24:42.380343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.983 [2024-04-26 14:24:42.389396] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.983 [2024-04-26 14:24:42.389432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.983 [2024-04-26 14:24:42.389450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.983 [2024-04-26 14:24:42.398238] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.983 [2024-04-26 14:24:42.398272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.983 [2024-04-26 14:24:42.398290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.983 [2024-04-26 14:24:42.406569] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.983 [2024-04-26 14:24:42.406606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.983 [2024-04-26 14:24:42.406624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.983 [2024-04-26 14:24:42.415614] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.983 [2024-04-26 14:24:42.415656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.983 [2024-04-26 14:24:42.415676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.983 [2024-04-26 14:24:42.423683] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.983 [2024-04-26 14:24:42.423715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.983 [2024-04-26 14:24:42.423733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.983 [2024-04-26 14:24:42.431797] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.983 [2024-04-26 14:24:42.431829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.983 [2024-04-26 
14:24:42.431847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.983 [2024-04-26 14:24:42.439980] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.983 [2024-04-26 14:24:42.440016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.983 [2024-04-26 14:24:42.440034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.983 [2024-04-26 14:24:42.448004] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.983 [2024-04-26 14:24:42.448037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.983 [2024-04-26 14:24:42.448055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.983 [2024-04-26 14:24:42.456013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.983 [2024-04-26 14:24:42.456045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.983 [2024-04-26 14:24:42.456069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.983 [2024-04-26 14:24:42.463986] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.983 [2024-04-26 14:24:42.464018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.983 [2024-04-26 14:24:42.464036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.983 [2024-04-26 14:24:42.472063] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.983 [2024-04-26 14:24:42.472097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.983 [2024-04-26 14:24:42.472115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.983 [2024-04-26 14:24:42.480121] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.983 [2024-04-26 14:24:42.480156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.983 [2024-04-26 14:24:42.480175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.983 [2024-04-26 14:24:42.488078] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.984 [2024-04-26 14:24:42.488110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:00.984 [2024-04-26 14:24:42.488127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.984 [2024-04-26 14:24:42.496070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.984 [2024-04-26 14:24:42.496102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.984 [2024-04-26 14:24:42.496119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.984 [2024-04-26 14:24:42.504058] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.984 [2024-04-26 14:24:42.504090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.984 [2024-04-26 14:24:42.504108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.984 [2024-04-26 14:24:42.512049] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.984 [2024-04-26 14:24:42.512085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.984 [2024-04-26 14:24:42.512103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:00.984 [2024-04-26 14:24:42.520306] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.984 [2024-04-26 14:24:42.520347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.984 [2024-04-26 14:24:42.520365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:00.984 [2024-04-26 14:24:42.528279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.984 [2024-04-26 14:24:42.528318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.984 [2024-04-26 14:24:42.528337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:00.984 [2024-04-26 14:24:42.536272] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.984 [2024-04-26 14:24:42.536305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.984 [2024-04-26 14:24:42.536323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:00.984 [2024-04-26 14:24:42.544253] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:00.984 [2024-04-26 14:24:42.544288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.984 [2024-04-26 14:24:42.544306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.243 [2024-04-26 14:24:42.552235] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.243 [2024-04-26 14:24:42.552271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.243 [2024-04-26 14:24:42.552289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.243 [2024-04-26 14:24:42.560456] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.243 [2024-04-26 14:24:42.560491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.243 [2024-04-26 14:24:42.560509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.243 [2024-04-26 14:24:42.568620] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.243 [2024-04-26 14:24:42.568659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.243 [2024-04-26 14:24:42.568678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.243 [2024-04-26 14:24:42.576729] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.243 [2024-04-26 14:24:42.576763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.243 [2024-04-26 14:24:42.576781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.243 [2024-04-26 14:24:42.584832] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.243 [2024-04-26 14:24:42.584866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.243 [2024-04-26 14:24:42.584884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.243 [2024-04-26 14:24:42.592972] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.243 [2024-04-26 14:24:42.593005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.243 [2024-04-26 14:24:42.593023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.243 [2024-04-26 14:24:42.601063] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.243 [2024-04-26 14:24:42.601097] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.243 [2024-04-26 14:24:42.601115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.243 [2024-04-26 14:24:42.609139] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.243 [2024-04-26 14:24:42.609180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.243 [2024-04-26 14:24:42.609198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.243 [2024-04-26 14:24:42.617131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.243 [2024-04-26 14:24:42.617165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.243 [2024-04-26 14:24:42.617183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.243 [2024-04-26 14:24:42.625153] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.243 [2024-04-26 14:24:42.625186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.243 [2024-04-26 14:24:42.625205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.243 [2024-04-26 14:24:42.633179] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.243 [2024-04-26 14:24:42.633213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.243 [2024-04-26 14:24:42.633230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.243 [2024-04-26 14:24:42.641209] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.243 [2024-04-26 14:24:42.641248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.243 [2024-04-26 14:24:42.641266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.243 [2024-04-26 14:24:42.649280] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.243 [2024-04-26 14:24:42.649315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.243 [2024-04-26 14:24:42.649333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.243 [2024-04-26 14:24:42.657423] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.244 
[2024-04-26 14:24:42.657457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.244 [2024-04-26 14:24:42.657475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.244 [2024-04-26 14:24:42.665767] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.244 [2024-04-26 14:24:42.665805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.244 [2024-04-26 14:24:42.665831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.244 [2024-04-26 14:24:42.673986] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.244 [2024-04-26 14:24:42.674022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.244 [2024-04-26 14:24:42.674040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.244 [2024-04-26 14:24:42.682288] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.244 [2024-04-26 14:24:42.682325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.244 [2024-04-26 14:24:42.682343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.244 [2024-04-26 14:24:42.686901] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.244 [2024-04-26 14:24:42.686933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.244 [2024-04-26 14:24:42.686951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.244 [2024-04-26 14:24:42.694923] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.244 [2024-04-26 14:24:42.694958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.244 [2024-04-26 14:24:42.694976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.244 [2024-04-26 14:24:42.703040] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.244 [2024-04-26 14:24:42.703076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.244 [2024-04-26 14:24:42.703093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.244 [2024-04-26 14:24:42.711201] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xb6d190) 00:20:01.244 [2024-04-26 14:24:42.711236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.244 [2024-04-26 14:24:42.711254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.244 [2024-04-26 14:24:42.719402] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.244 [2024-04-26 14:24:42.719438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.244 [2024-04-26 14:24:42.719456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.244 [2024-04-26 14:24:42.727682] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.244 [2024-04-26 14:24:42.727718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.244 [2024-04-26 14:24:42.727736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.244 [2024-04-26 14:24:42.735671] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.244 [2024-04-26 14:24:42.735716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.244 [2024-04-26 14:24:42.735734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.244 [2024-04-26 14:24:42.743845] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.244 [2024-04-26 14:24:42.743881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.244 [2024-04-26 14:24:42.743900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.244 [2024-04-26 14:24:42.752038] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.244 [2024-04-26 14:24:42.752072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.244 [2024-04-26 14:24:42.752091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.244 [2024-04-26 14:24:42.760238] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.244 [2024-04-26 14:24:42.760274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.244 [2024-04-26 14:24:42.760292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.244 [2024-04-26 14:24:42.768059] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.244 [2024-04-26 14:24:42.768097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.244 [2024-04-26 14:24:42.768116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.244 [2024-04-26 14:24:42.776176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.244 [2024-04-26 14:24:42.776213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.244 [2024-04-26 14:24:42.776233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.244 [2024-04-26 14:24:42.784495] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.244 [2024-04-26 14:24:42.784533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.244 [2024-04-26 14:24:42.784551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.244 [2024-04-26 14:24:42.792486] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.244 [2024-04-26 14:24:42.792521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.244 [2024-04-26 14:24:42.792539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.244 [2024-04-26 14:24:42.800590] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.244 [2024-04-26 14:24:42.800624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.244 [2024-04-26 14:24:42.800649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.244 [2024-04-26 14:24:42.809017] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.244 [2024-04-26 14:24:42.809054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.244 [2024-04-26 14:24:42.809072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.503 [2024-04-26 14:24:42.817155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.503 [2024-04-26 14:24:42.817192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.503 [2024-04-26 14:24:42.817210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:20:01.503 [2024-04-26 14:24:42.825364] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.503 [2024-04-26 14:24:42.825402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.503 [2024-04-26 14:24:42.825421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.503 [2024-04-26 14:24:42.833467] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.503 [2024-04-26 14:24:42.833502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.503 [2024-04-26 14:24:42.833520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.503 [2024-04-26 14:24:42.841717] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.503 [2024-04-26 14:24:42.841754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.503 [2024-04-26 14:24:42.841772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.503 [2024-04-26 14:24:42.849882] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.503 [2024-04-26 14:24:42.849919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.503 [2024-04-26 14:24:42.849937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.503 [2024-04-26 14:24:42.858106] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.503 [2024-04-26 14:24:42.858141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.503 [2024-04-26 14:24:42.858158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.503 [2024-04-26 14:24:42.866239] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.503 [2024-04-26 14:24:42.866273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.503 [2024-04-26 14:24:42.866291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.503 [2024-04-26 14:24:42.874332] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.503 [2024-04-26 14:24:42.874367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.503 [2024-04-26 14:24:42.874395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.503 [2024-04-26 14:24:42.882400] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.503 [2024-04-26 14:24:42.882436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.503 [2024-04-26 14:24:42.882454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.503 [2024-04-26 14:24:42.890493] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.503 [2024-04-26 14:24:42.890528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.503 [2024-04-26 14:24:42.890546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.503 [2024-04-26 14:24:42.898647] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.503 [2024-04-26 14:24:42.898682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.503 [2024-04-26 14:24:42.898700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:01.504 [2024-04-26 14:24:42.906812] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.504 [2024-04-26 14:24:42.906848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.504 [2024-04-26 14:24:42.906866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:01.504 [2024-04-26 14:24:42.915031] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.504 [2024-04-26 14:24:42.915067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.504 [2024-04-26 14:24:42.915085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:01.504 [2024-04-26 14:24:42.923266] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.504 [2024-04-26 14:24:42.923304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.504 [2024-04-26 14:24:42.923321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:01.504 [2024-04-26 14:24:42.931616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190) 00:20:01.504 [2024-04-26 14:24:42.931662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.504 [2024-04-26 14:24:42.931691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:20:01.504 [2024-04-26 14:24:42.939927] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190)
00:20:01.504 [2024-04-26 14:24:42.939963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.504 [2024-04-26 14:24:42.939981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... 26 further near-identical entries elided (14:24:42.948 through 14:24:43.151, log timestamps 00:20:01.504-00:20:01.763): each reports nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb6d190), followed by a READ command print (len:32) and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1; only cid, lba and sqhd vary ...]
00:20:01.763 
00:20:01.763 Latency(us)
00:20:01.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:01.763 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:20:01.763 nvme0n1 : 2.00 3849.41 481.18 0.00 0.00 4151.93 649.29 11311.03
00:20:01.763 ===================================================================================================================
00:20:01.763 Total : 3849.41 481.18 0.00 0.00 4151.93 649.29 11311.03
00:20:01.763 0
00:20:01.763 14:24:43 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:01.763 14:24:43 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:01.763 14:24:43 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:01.763 | .driver_specific
00:20:01.763 | .nvme_error
00:20:01.763 | .status_code
00:20:01.763 | .command_transient_transport_error'
00:20:01.763 14:24:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:02.021 14:24:43 -- host/digest.sh@71 -- # (( 248 > 0 ))
00:20:02.021 14:24:43 -- host/digest.sh@73 -- # killprocess 3207774
00:20:02.021 14:24:43 -- common/autotest_common.sh@936 -- # '[' -z 3207774 ']'
00:20:02.021 14:24:43 -- common/autotest_common.sh@940 -- # kill -0 3207774
00:20:02.021 14:24:43 -- common/autotest_common.sh@941 -- # uname
00:20:02.021 14:24:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:02.021 14:24:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3207774
00:20:02.021 14:24:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:20:02.021 14:24:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:20:02.021 14:24:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3207774'
00:20:02.021 killing process with pid 3207774
00:20:02.021 14:24:43 -- common/autotest_common.sh@955 -- # kill 3207774
00:20:02.021 Received shutdown signal, test time was about 2.000000 seconds
00:20:02.021 
00:20:02.021 Latency(us)
00:20:02.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:02.021 ===================================================================================================================
00:20:02.021 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:02.021 14:24:43 -- common/autotest_common.sh@960 -- # wait 3207774
00:20:02.280 14:24:43 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:20:02.280 14:24:43 -- host/digest.sh@54 -- # local rw bs qd
00:20:02.280 14:24:43 -- host/digest.sh@56 -- # rw=randwrite
00:20:02.280 14:24:43 -- host/digest.sh@56 -- # bs=4096
00:20:02.280 14:24:43 -- host/digest.sh@56 -- # qd=128
00:20:02.280 14:24:43 -- host/digest.sh@58 -- # bperfpid=3208135
00:20:02.280 14:24:43 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:20:02.280 14:24:43 -- host/digest.sh@60 -- # waitforlisten 3208135 /var/tmp/bperf.sock
00:20:02.280 14:24:43 -- common/autotest_common.sh@817 -- # '[' -z 3208135 ']'
00:20:02.280 14:24:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:20:02.280 14:24:43 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:02.280 14:24:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:20:02.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
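[Editorial aside: the get_transient_errcount trace above is the crux of the digest test's pass/fail check. A minimal standalone sketch of the same query, assuming (as in the trace) a bdevperf instance listening on /var/tmp/bperf.sock that exposes the bdev nvme0n1:

  # Read the per-bdev NVMe error statistics and pull out the transient
  # transport error tally (the jq filter is exactly the one traced above).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error'

The (( 248 > 0 )) evaluation in the trace shows this query returned 248 for the randread pass: with digest corruption being injected, every mismatch is tallied under the TRANSIENT TRANSPORT ERROR status code, so a non-zero count is the expected (passing) result. The latency table is also self-consistent: 3849.41 IOPS at the 131072-byte IO size works out to 3849.41 / 8 = 481.18 MiB/s, matching the reported throughput.]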
00:20:02.280 14:24:43 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:02.280 14:24:43 -- common/autotest_common.sh@10 -- # set +x
00:20:02.280 [2024-04-26 14:24:43.755505] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
00:20:02.280 [2024-04-26 14:24:43.755606] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208135 ]
00:20:02.280 EAL: No free 2048 kB hugepages reported on node 1
00:20:02.280 [2024-04-26 14:24:43.815785] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:02.280 [2024-04-26 14:24:43.929928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:02.538 14:24:44 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:02.538 14:24:44 -- common/autotest_common.sh@850 -- # return 0
00:20:02.538 14:24:44 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:02.538 14:24:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:02.841 14:24:44 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:20:02.841 14:24:44 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:02.841 14:24:44 -- common/autotest_common.sh@10 -- # set +x
00:20:02.841 14:24:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:02.841 14:24:44 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:02.841 14:24:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:03.417 nvme0n1
00:20:03.417 14:24:44 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:20:03.417 14:24:44 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:03.417 14:24:44 -- common/autotest_common.sh@10 -- # set +x
00:20:03.417 14:24:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:03.417 14:24:44 -- host/digest.sh@69 -- # bperf_py perform_tests
00:20:03.417 14:24:44 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:20:03.417 Running I/O for 2 seconds...
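[Editorial aside: the xtrace above is the entire setup for the randwrite error pass. A hedged sketch of the equivalent standalone commands, with paths, address and NQN exactly as traced; note the harness issues accel_error_inject_error through rpc_cmd rather than bperf_rpc, which we read as targeting the nvmf target application's own RPC socket (an inference, since the underlying rpc.py call is not shown):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # bdevperf side: keep per-command NVMe error statistics and retry failed IO indefinitely
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # attach the TCP controller with data digest enabled (--ddgst) so payload CRC-32C is verified
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # target side: corrupt crc32c accel results (-o crc32c -t corrupt -i 256, as traced above)
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256
  # drive the 2-second randwrite workload on the already-running bdevperf instance
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests

With the data digest enabled and crc32c results corrupted by the injector, digest verification fails on receive, each affected WRITE completes with a transient transport error, and bdevperf retries it; that is exactly the flood of data_crc32_calc_done errors that follows.]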
00:20:03.417 [2024-04-26 14:24:44.865945] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190fa3a0
00:20:03.418 [2024-04-26 14:24:44.867135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:03.418 [2024-04-26 14:24:44.867177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:001c p:0 m:0 dnr:0
[... roughly 100 further near-identical entries elided (14:24:44.880 through 14:24:46.404, log timestamps 00:20:03.418-00:20:04.974): each reports tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360), at first with pdu cycling through offsets such as 0x2000190eaab8, 0x2000190e8d30, 0x2000190f0bc0 and 0x2000190f5378, then settling on pdu=0x2000190dfdc0; every 4 KiB WRITE (len:0x1000) completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) on qid:1, with only cid, lba and sqhd varying. The run is still in progress where this excerpt ends ...]
cid:34 nsid:1 lba:4861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.974 [2024-04-26 14:24:46.404737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:04.974 [2024-04-26 14:24:46.419496] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:04.974 [2024-04-26 14:24:46.419722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.974 [2024-04-26 14:24:46.419750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:04.974 [2024-04-26 14:24:46.434430] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:04.974 [2024-04-26 14:24:46.434652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.974 [2024-04-26 14:24:46.434680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:04.974 [2024-04-26 14:24:46.449345] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:04.974 [2024-04-26 14:24:46.449571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.974 [2024-04-26 14:24:46.449599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:04.974 [2024-04-26 14:24:46.464272] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:04.974 [2024-04-26 14:24:46.464485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.974 [2024-04-26 14:24:46.464513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:04.974 [2024-04-26 14:24:46.479213] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:04.974 [2024-04-26 14:24:46.479428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.974 [2024-04-26 14:24:46.479462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:04.974 [2024-04-26 14:24:46.494107] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:04.974 [2024-04-26 14:24:46.494320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.974 [2024-04-26 14:24:46.494349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:04.974 [2024-04-26 14:24:46.509051] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:04.974 [2024-04-26 14:24:46.509268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:38 nsid:1 lba:1215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.974 [2024-04-26 14:24:46.509296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:04.974 [2024-04-26 14:24:46.523953] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:04.974 [2024-04-26 14:24:46.524170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.974 [2024-04-26 14:24:46.524198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:04.974 [2024-04-26 14:24:46.538922] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:04.974 [2024-04-26 14:24:46.539156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:04.974 [2024-04-26 14:24:46.539184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:05.232 [2024-04-26 14:24:46.553913] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:05.232 [2024-04-26 14:24:46.554129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.232 [2024-04-26 14:24:46.554158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:05.232 [2024-04-26 14:24:46.568851] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:05.232 [2024-04-26 14:24:46.569070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.232 [2024-04-26 14:24:46.569101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:05.232 [2024-04-26 14:24:46.583852] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:05.232 [2024-04-26 14:24:46.584067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.232 [2024-04-26 14:24:46.584095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:05.232 [2024-04-26 14:24:46.598787] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:05.232 [2024-04-26 14:24:46.599002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.232 [2024-04-26 14:24:46.599031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:05.232 [2024-04-26 14:24:46.613818] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:05.232 [2024-04-26 14:24:46.614040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.232 [2024-04-26 14:24:46.614068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:05.232 [2024-04-26 14:24:46.628854] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:05.232 [2024-04-26 14:24:46.629072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.232 [2024-04-26 14:24:46.629100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:05.232 [2024-04-26 14:24:46.643824] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:05.232 [2024-04-26 14:24:46.644040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.232 [2024-04-26 14:24:46.644069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:05.232 [2024-04-26 14:24:46.658895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:05.232 [2024-04-26 14:24:46.659114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.232 [2024-04-26 14:24:46.659142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:05.232 [2024-04-26 14:24:46.673872] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:05.232 [2024-04-26 14:24:46.674088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.232 [2024-04-26 14:24:46.674117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:05.232 [2024-04-26 14:24:46.688792] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:05.232 [2024-04-26 14:24:46.689008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.232 [2024-04-26 14:24:46.689036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:05.233 [2024-04-26 14:24:46.703727] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:05.233 [2024-04-26 14:24:46.703946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.233 [2024-04-26 14:24:46.703975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:05.233 [2024-04-26 14:24:46.718673] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:05.233 [2024-04-26 14:24:46.718891] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.233 [2024-04-26 14:24:46.718919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:05.233 [2024-04-26 14:24:46.733619] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:05.233 [2024-04-26 14:24:46.733848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.233 [2024-04-26 14:24:46.733877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:05.233 [2024-04-26 14:24:46.748592] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:05.233 [2024-04-26 14:24:46.748821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.233 [2024-04-26 14:24:46.748851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:05.233 [2024-04-26 14:24:46.763535] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:05.233 [2024-04-26 14:24:46.763762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.233 [2024-04-26 14:24:46.763791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:05.233 [2024-04-26 14:24:46.778474] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:05.233 [2024-04-26 14:24:46.778697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.233 [2024-04-26 14:24:46.778725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:05.233 [2024-04-26 14:24:46.793396] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:05.233 [2024-04-26 14:24:46.793610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.233 [2024-04-26 14:24:46.793648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:05.491 [2024-04-26 14:24:46.808348] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:05.491 [2024-04-26 14:24:46.808564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.491 [2024-04-26 14:24:46.808596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:05.491 [2024-04-26 14:24:46.823476] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:05.491 [2024-04-26 
14:24:46.823700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.491 [2024-04-26 14:24:46.823729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:05.491 [2024-04-26 14:24:46.838476] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:05.491 [2024-04-26 14:24:46.838715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.491 [2024-04-26 14:24:46.838743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:05.491 [2024-04-26 14:24:46.853423] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x999360) with pdu=0x2000190dfdc0 00:20:05.491 [2024-04-26 14:24:46.853642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.491 [2024-04-26 14:24:46.853671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:05.491 00:20:05.491 Latency(us) 00:20:05.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.491 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:05.491 nvme0n1 : 2.01 17159.81 67.03 0.00 0.00 7440.50 3446.71 19029.71 00:20:05.491 =================================================================================================================== 00:20:05.491 Total : 17159.81 67.03 0.00 0.00 7440.50 3446.71 19029.71 00:20:05.491 0 00:20:05.491 14:24:46 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:05.491 14:24:46 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:05.491 14:24:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:05.491 14:24:46 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:05.491 | .driver_specific 00:20:05.491 | .nvme_error 00:20:05.491 | .status_code 00:20:05.491 | .command_transient_transport_error' 00:20:05.751 14:24:47 -- host/digest.sh@71 -- # (( 135 > 0 )) 00:20:05.751 14:24:47 -- host/digest.sh@73 -- # killprocess 3208135 00:20:05.751 14:24:47 -- common/autotest_common.sh@936 -- # '[' -z 3208135 ']' 00:20:05.751 14:24:47 -- common/autotest_common.sh@940 -- # kill -0 3208135 00:20:05.751 14:24:47 -- common/autotest_common.sh@941 -- # uname 00:20:05.751 14:24:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:05.751 14:24:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3208135 00:20:05.751 14:24:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:05.751 14:24:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:05.751 14:24:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3208135' 00:20:05.751 killing process with pid 3208135 00:20:05.751 14:24:47 -- common/autotest_common.sh@955 -- # kill 3208135 00:20:05.751 Received shutdown signal, test time was about 2.000000 seconds 00:20:05.751 00:20:05.751 Latency(us) 00:20:05.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.751 
=================================================================================================================== 00:20:05.751 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:05.751 14:24:47 -- common/autotest_common.sh@960 -- # wait 3208135 00:20:06.009 14:24:47 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:20:06.009 14:24:47 -- host/digest.sh@54 -- # local rw bs qd 00:20:06.009 14:24:47 -- host/digest.sh@56 -- # rw=randwrite 00:20:06.009 14:24:47 -- host/digest.sh@56 -- # bs=131072 00:20:06.009 14:24:47 -- host/digest.sh@56 -- # qd=16 00:20:06.009 14:24:47 -- host/digest.sh@58 -- # bperfpid=3208453 00:20:06.009 14:24:47 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:20:06.009 14:24:47 -- host/digest.sh@60 -- # waitforlisten 3208453 /var/tmp/bperf.sock 00:20:06.009 14:24:47 -- common/autotest_common.sh@817 -- # '[' -z 3208453 ']' 00:20:06.009 14:24:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:06.009 14:24:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:06.009 14:24:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:06.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:06.009 14:24:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:06.009 14:24:47 -- common/autotest_common.sh@10 -- # set +x 00:20:06.009 [2024-04-26 14:24:47.457471] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:20:06.009 [2024-04-26 14:24:47.457572] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208453 ] 00:20:06.009 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:06.009 Zero copy mechanism will not be used. 
00:20:06.009 EAL: No free 2048 kB hugepages reported on node 1 00:20:06.009 [2024-04-26 14:24:47.517540] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.267 [2024-04-26 14:24:47.632603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.267 14:24:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:06.267 14:24:47 -- common/autotest_common.sh@850 -- # return 0 00:20:06.267 14:24:47 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:06.267 14:24:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:06.526 14:24:48 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:06.526 14:24:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.526 14:24:48 -- common/autotest_common.sh@10 -- # set +x 00:20:06.526 14:24:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.526 14:24:48 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:06.526 14:24:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:07.092 nvme0n1 00:20:07.092 14:24:48 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:07.092 14:24:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.092 14:24:48 -- common/autotest_common.sh@10 -- # set +x 00:20:07.092 14:24:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.092 14:24:48 -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:07.092 14:24:48 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:07.351 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:07.351 Zero copy mechanism will not be used. 00:20:07.351 Running I/O for 2 seconds... 
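[Context for the pass below, reconstructed from the harness trace above: bdevperf is attached with data digest enabled (--ddgst) while accel_error_inject_error -o crc32c -t corrupt -i 32 injects CRC-32C corruption, so the WRITE completions that follow are expected to fail the receive-side digest check and be counted as transient transport errors. For reference only, a minimal sketch of the count query that get_transient_errcount performs, using the exact rpc.py/jq pipeline from the trace joined onto one line (socket path and bdev name as used by this job):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

This is a sketch of the check the harness runs, not an addition to it.]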
00:20:07.351 [2024-04-26 14:24:48.731029] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 [2024-04-26 14:24:48.731425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-04-26 14:24:48.731468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... same three-line pattern (tcp.c:2047:data_crc32_calc_done data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90, WRITE qid:1 cid:15 len:32 SGL TRANSPORT DATA BLOCK, COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats roughly every 7-9 ms for the remainder of the 2-second run; several dozen near-identical occurrences condensed, log truncated mid-entry at 14:24:49.271 ...]
digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:07.871 [2024-04-26 14:24:49.271951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.871 [2024-04-26 14:24:49.271985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.871 [2024-04-26 14:24:49.279040] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:07.871 [2024-04-26 14:24:49.279389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.871 [2024-04-26 14:24:49.279423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.871 [2024-04-26 14:24:49.286738] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:07.871 [2024-04-26 14:24:49.287168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.871 [2024-04-26 14:24:49.287202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.871 [2024-04-26 14:24:49.295794] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:07.871 [2024-04-26 14:24:49.296115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.871 [2024-04-26 14:24:49.296154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.871 [2024-04-26 14:24:49.303576] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:07.871 [2024-04-26 14:24:49.303978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.871 [2024-04-26 14:24:49.304011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.872 [2024-04-26 14:24:49.312204] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:07.872 [2024-04-26 14:24:49.312592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.872 [2024-04-26 14:24:49.312639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.872 [2024-04-26 14:24:49.320894] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:07.872 [2024-04-26 14:24:49.321241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.872 [2024-04-26 14:24:49.321276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.872 [2024-04-26 14:24:49.329673] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:07.872 [2024-04-26 14:24:49.330098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.872 [2024-04-26 14:24:49.330131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.872 [2024-04-26 14:24:49.338242] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:07.872 [2024-04-26 14:24:49.338674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.872 [2024-04-26 14:24:49.338707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.872 [2024-04-26 14:24:49.346963] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:07.872 [2024-04-26 14:24:49.347421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.872 [2024-04-26 14:24:49.347454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.872 [2024-04-26 14:24:49.355824] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:07.872 [2024-04-26 14:24:49.356183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.872 [2024-04-26 14:24:49.356217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.872 [2024-04-26 14:24:49.364499] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:07.872 [2024-04-26 14:24:49.364903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.872 [2024-04-26 14:24:49.364936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.872 [2024-04-26 14:24:49.373176] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:07.872 [2024-04-26 14:24:49.373562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.872 [2024-04-26 14:24:49.373595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.872 [2024-04-26 14:24:49.382094] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:07.872 [2024-04-26 14:24:49.382519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.872 [2024-04-26 14:24:49.382553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:20:07.872 [2024-04-26 14:24:49.389816] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:07.872 [2024-04-26 14:24:49.390160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.872 [2024-04-26 14:24:49.390193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.872 [2024-04-26 14:24:49.397559] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:07.872 [2024-04-26 14:24:49.397946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.872 [2024-04-26 14:24:49.397978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:07.872 [2024-04-26 14:24:49.405753] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:07.872 [2024-04-26 14:24:49.406080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.872 [2024-04-26 14:24:49.406120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:07.872 [2024-04-26 14:24:49.414424] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:07.872 [2024-04-26 14:24:49.414872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.872 [2024-04-26 14:24:49.414905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:07.872 [2024-04-26 14:24:49.423095] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:07.872 [2024-04-26 14:24:49.423449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.872 [2024-04-26 14:24:49.423482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:07.872 [2024-04-26 14:24:49.431788] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:07.872 [2024-04-26 14:24:49.432197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.872 [2024-04-26 14:24:49.432230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.131 [2024-04-26 14:24:49.440221] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.131 [2024-04-26 14:24:49.440563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.131 [2024-04-26 14:24:49.440597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.131 [2024-04-26 14:24:49.448708] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.131 [2024-04-26 14:24:49.449120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.131 [2024-04-26 14:24:49.449153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.131 [2024-04-26 14:24:49.457471] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.131 [2024-04-26 14:24:49.457834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.131 [2024-04-26 14:24:49.457868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.131 [2024-04-26 14:24:49.465699] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.131 [2024-04-26 14:24:49.466024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.131 [2024-04-26 14:24:49.466057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.131 [2024-04-26 14:24:49.473657] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.131 [2024-04-26 14:24:49.473985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.131 [2024-04-26 14:24:49.474020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.131 [2024-04-26 14:24:49.481923] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.131 [2024-04-26 14:24:49.482351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.131 [2024-04-26 14:24:49.482384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.131 [2024-04-26 14:24:49.490524] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.131 [2024-04-26 14:24:49.490854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.131 [2024-04-26 14:24:49.490888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.131 [2024-04-26 14:24:49.497577] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.131 [2024-04-26 14:24:49.497913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.131 [2024-04-26 14:24:49.497946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.131 [2024-04-26 14:24:49.504643] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.131 [2024-04-26 14:24:49.504971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.131 [2024-04-26 14:24:49.505004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.131 [2024-04-26 14:24:49.511715] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.131 [2024-04-26 14:24:49.512039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.131 [2024-04-26 14:24:49.512073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.131 [2024-04-26 14:24:49.518977] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.131 [2024-04-26 14:24:49.519302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.131 [2024-04-26 14:24:49.519336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.131 [2024-04-26 14:24:49.525990] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.131 [2024-04-26 14:24:49.526313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.131 [2024-04-26 14:24:49.526352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.131 [2024-04-26 14:24:49.533093] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.131 [2024-04-26 14:24:49.533476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.131 [2024-04-26 14:24:49.533509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.131 [2024-04-26 14:24:49.539761] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.131 [2024-04-26 14:24:49.540088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.131 [2024-04-26 14:24:49.540121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.132 [2024-04-26 14:24:49.546627] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.132 [2024-04-26 14:24:49.546964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.132 [2024-04-26 14:24:49.546997] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.132 [2024-04-26 14:24:49.552946] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.132 [2024-04-26 14:24:49.553268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.132 [2024-04-26 14:24:49.553303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.132 [2024-04-26 14:24:49.560172] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.132 [2024-04-26 14:24:49.560495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.132 [2024-04-26 14:24:49.560529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.132 [2024-04-26 14:24:49.567162] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.132 [2024-04-26 14:24:49.567482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.132 [2024-04-26 14:24:49.567515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.132 [2024-04-26 14:24:49.574560] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.132 [2024-04-26 14:24:49.574893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.132 [2024-04-26 14:24:49.574927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.132 [2024-04-26 14:24:49.581331] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.132 [2024-04-26 14:24:49.581724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.132 [2024-04-26 14:24:49.581757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.132 [2024-04-26 14:24:49.589935] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.132 [2024-04-26 14:24:49.590328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.132 [2024-04-26 14:24:49.590361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.132 [2024-04-26 14:24:49.598018] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.132 [2024-04-26 14:24:49.598342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.132 
[2024-04-26 14:24:49.598377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.132 [2024-04-26 14:24:49.606189] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.132 [2024-04-26 14:24:49.606517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.132 [2024-04-26 14:24:49.606550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.132 [2024-04-26 14:24:49.614816] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.132 [2024-04-26 14:24:49.615202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.132 [2024-04-26 14:24:49.615239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.132 [2024-04-26 14:24:49.622277] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.132 [2024-04-26 14:24:49.622601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.132 [2024-04-26 14:24:49.622643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.132 [2024-04-26 14:24:49.629370] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.132 [2024-04-26 14:24:49.629704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.132 [2024-04-26 14:24:49.629738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.132 [2024-04-26 14:24:49.636466] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.132 [2024-04-26 14:24:49.636795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.132 [2024-04-26 14:24:49.636828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.132 [2024-04-26 14:24:49.643523] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.132 [2024-04-26 14:24:49.643859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.132 [2024-04-26 14:24:49.643894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.132 [2024-04-26 14:24:49.650422] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.132 [2024-04-26 14:24:49.650754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:08.132 [2024-04-26 14:24:49.650788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.132 [2024-04-26 14:24:49.657441] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.132 [2024-04-26 14:24:49.657768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.132 [2024-04-26 14:24:49.657801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.132 [2024-04-26 14:24:49.664299] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.132 [2024-04-26 14:24:49.664627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.132 [2024-04-26 14:24:49.664668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.132 [2024-04-26 14:24:49.671332] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.132 [2024-04-26 14:24:49.671672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.132 [2024-04-26 14:24:49.671705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.132 [2024-04-26 14:24:49.677784] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.132 [2024-04-26 14:24:49.678205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.132 [2024-04-26 14:24:49.678239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.132 [2024-04-26 14:24:49.684978] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.132 [2024-04-26 14:24:49.685301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.132 [2024-04-26 14:24:49.685333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.132 [2024-04-26 14:24:49.692144] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.132 [2024-04-26 14:24:49.692483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.132 [2024-04-26 14:24:49.692516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.132 [2024-04-26 14:24:49.699157] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.132 [2024-04-26 14:24:49.699513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.132 [2024-04-26 14:24:49.699547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.391 [2024-04-26 14:24:49.706558] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.391 [2024-04-26 14:24:49.706884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.391 [2024-04-26 14:24:49.706917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.391 [2024-04-26 14:24:49.713652] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.391 [2024-04-26 14:24:49.713975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.391 [2024-04-26 14:24:49.714015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.391 [2024-04-26 14:24:49.720649] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.391 [2024-04-26 14:24:49.720975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.391 [2024-04-26 14:24:49.721008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.391 [2024-04-26 14:24:49.727780] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.391 [2024-04-26 14:24:49.728119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.391 [2024-04-26 14:24:49.728152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.391 [2024-04-26 14:24:49.734947] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.391 [2024-04-26 14:24:49.735277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.391 [2024-04-26 14:24:49.735309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.391 [2024-04-26 14:24:49.741732] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.391 [2024-04-26 14:24:49.742059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.391 [2024-04-26 14:24:49.742092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.391 [2024-04-26 14:24:49.748364] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.391 [2024-04-26 14:24:49.748706] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.391 [2024-04-26 14:24:49.748739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.391 [2024-04-26 14:24:49.755455] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.391 [2024-04-26 14:24:49.755787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.391 [2024-04-26 14:24:49.755821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.391 [2024-04-26 14:24:49.762260] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.391 [2024-04-26 14:24:49.762582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.391 [2024-04-26 14:24:49.762615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.391 [2024-04-26 14:24:49.768707] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.391 [2024-04-26 14:24:49.769031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.391 [2024-04-26 14:24:49.769064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.391 [2024-04-26 14:24:49.775205] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.391 [2024-04-26 14:24:49.775533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.391 [2024-04-26 14:24:49.775566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.391 [2024-04-26 14:24:49.782476] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.391 [2024-04-26 14:24:49.782814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.391 [2024-04-26 14:24:49.782849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.391 [2024-04-26 14:24:49.789820] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.391 [2024-04-26 14:24:49.790149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.391 [2024-04-26 14:24:49.790181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.391 [2024-04-26 14:24:49.797057] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.391 [2024-04-26 14:24:49.797382] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.391 [2024-04-26 14:24:49.797416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.391 [2024-04-26 14:24:49.803867] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.392 [2024-04-26 14:24:49.804190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.392 [2024-04-26 14:24:49.804226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.392 [2024-04-26 14:24:49.810834] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.392 [2024-04-26 14:24:49.811158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.392 [2024-04-26 14:24:49.811191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.392 [2024-04-26 14:24:49.817248] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.392 [2024-04-26 14:24:49.817570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.392 [2024-04-26 14:24:49.817603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.392 [2024-04-26 14:24:49.824528] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.392 [2024-04-26 14:24:49.824866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.392 [2024-04-26 14:24:49.824905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.392 [2024-04-26 14:24:49.831424] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.392 [2024-04-26 14:24:49.831762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.392 [2024-04-26 14:24:49.831802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.392 [2024-04-26 14:24:49.838512] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.392 [2024-04-26 14:24:49.838842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.392 [2024-04-26 14:24:49.838875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.392 [2024-04-26 14:24:49.845810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 
00:20:08.392 [2024-04-26 14:24:49.846134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.392 [2024-04-26 14:24:49.846168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.392 [2024-04-26 14:24:49.852749] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.392 [2024-04-26 14:24:49.853083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.392 [2024-04-26 14:24:49.853116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.392 [2024-04-26 14:24:49.859291] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.392 [2024-04-26 14:24:49.859613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.392 [2024-04-26 14:24:49.859658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.392 [2024-04-26 14:24:49.866322] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.392 [2024-04-26 14:24:49.866705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.392 [2024-04-26 14:24:49.866738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.392 [2024-04-26 14:24:49.873536] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.392 [2024-04-26 14:24:49.873881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.392 [2024-04-26 14:24:49.873915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.392 [2024-04-26 14:24:49.880345] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.392 [2024-04-26 14:24:49.880682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.392 [2024-04-26 14:24:49.880716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.392 [2024-04-26 14:24:49.887195] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.392 [2024-04-26 14:24:49.887527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.392 [2024-04-26 14:24:49.887561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.392 [2024-04-26 14:24:49.894186] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.392 [2024-04-26 14:24:49.894511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.392 [2024-04-26 14:24:49.894551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.392 [2024-04-26 14:24:49.901166] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.392 [2024-04-26 14:24:49.901489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.392 [2024-04-26 14:24:49.901523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.392 [2024-04-26 14:24:49.908268] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.392 [2024-04-26 14:24:49.908619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.392 [2024-04-26 14:24:49.908660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.392 [2024-04-26 14:24:49.915411] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.392 [2024-04-26 14:24:49.915743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.392 [2024-04-26 14:24:49.915776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.392 [2024-04-26 14:24:49.922437] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.392 [2024-04-26 14:24:49.922773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.392 [2024-04-26 14:24:49.922813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:08.392 [2024-04-26 14:24:49.930297] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.392 [2024-04-26 14:24:49.930709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.392 [2024-04-26 14:24:49.930742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:08.392 [2024-04-26 14:24:49.938943] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90 00:20:08.392 [2024-04-26 14:24:49.939359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.392 [2024-04-26 14:24:49.939392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:08.392 [2024-04-26 14:24:49.947576] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90
00:20:08.392 [2024-04-26 14:24:49.948015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.392 [2024-04-26 14:24:49.948048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.392 [2024-04-26 14:24:49.956025] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9996a0) with pdu=0x2000190fef90
00:20:08.392 [2024-04-26 14:24:49.956444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.392 [2024-04-26 14:24:49.956477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-record pattern (data digest error, WRITE command, TRANSIENT TRANSPORT ERROR completion) repeats for every remaining WRITE in the run, timestamps 14:24:49.964 through 14:24:50.726, with only the lba and sqhd values changing; elided here for readability ...]
00:20:09.173
00:20:09.174 Latency(us)
00:20:09.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:09.174 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:20:09.174 nvme0n1 : 2.00 4060.95 507.62 0.00 0.00 3929.55 2148.12 9757.58
00:20:09.174 ===================================================================================================================
00:20:09.174 Total : 4060.95 507.62 0.00 0.00 3929.55 2148.12 9757.58
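Each elided record above is the same three-part sequence: tcp.c's data_crc32_calc_done detects a CRC32C mismatch on the received data PDU, and the WRITE it belongs to is completed back to the initiator as COMMAND TRANSIENT TRANSPORT ERROR, printed as (00/22), i.e. status code type 0x0, status code 0x22. This is exactly the failure the digest test injects on purpose. To tally the injected failures from a captured console log, a one-liner such as the following works (a sketch only; console.log is a placeholder for wherever the output was saved):

    grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' console.log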
00:20:09.174 0
00:20:09.431 14:24:50 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
14:24:50 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
14:24:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
14:24:50 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:09.431 | .driver_specific
00:20:09.431 | .nvme_error
00:20:09.431 | .status_code
00:20:09.431 | .command_transient_transport_error'
14:24:51 -- host/digest.sh@71 -- # (( 262 > 0 ))
14:24:51 -- host/digest.sh@73 -- # killprocess 3208453
14:24:51 -- common/autotest_common.sh@936 -- # '[' -z 3208453 ']'
14:24:51 -- common/autotest_common.sh@940 -- # kill -0 3208453
14:24:51 -- common/autotest_common.sh@941 -- # uname
14:24:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
14:24:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3208453
14:24:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1
14:24:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
14:24:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3208453'
killing process with pid 3208453
14:24:51 -- common/autotest_common.sh@955 -- # kill 3208453
Received shutdown signal, test time was about 2.000000 seconds
00:20:09.690
00:20:09.690 Latency(us)
00:20:09.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:09.690 ===================================================================================================================
00:20:09.690 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:09.690 14:24:51 -- common/autotest_common.sh@960 -- # wait 3208453
00:20:09.690 14:24:51 -- host/digest.sh@116 -- # killprocess 3207398
14:24:51 -- common/autotest_common.sh@936 -- # '[' -z 3207398 ']'
14:24:51 -- common/autotest_common.sh@940 -- # kill -0 3207398
14:24:51 -- common/autotest_common.sh@941 -- # uname
14:24:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
14:24:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3207398
14:24:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0
14:24:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
14:24:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3207398'
killing process with pid 3207398
14:24:51 -- common/autotest_common.sh@955 -- # kill 3207398
14:24:51 -- common/autotest_common.sh@960 -- # wait 3207398
00:20:10.208
00:20:10.208 real 0m15.677s
00:20:10.208 user 0m31.868s
00:20:10.208 sys 0m4.067s
14:24:51 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:10.208 14:24:51 -- common/autotest_common.sh@10 -- # set +x
00:20:10.208 ************************************
00:20:10.208 END TEST nvmf_digest_error
00:20:10.208 ************************************
00:20:10.208 14:24:51 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:20:10.208 14:24:51 -- host/digest.sh@150 -- # nvmftestfini
00:20:10.208 14:24:51 -- nvmf/common.sh@477 -- # nvmfcleanup
00:20:10.208 14:24:51 -- nvmf/common.sh@117 -- # sync
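The pass/fail gate traced at host/digest.sh@71 above boils down to one RPC plus a jq projection: read the bdev's accumulated NVMe error counters and assert that at least one COMMAND TRANSIENT TRANSPORT ERROR was recorded. A minimal stand-alone sketch of that check, assuming the same rpc.py location and bperf socket path seen in the trace:

    # Count completions with status COMMAND TRANSIENT TRANSPORT ERROR, as
    # accumulated in the bdev's error counters (sketch of get_transient_errcount;
    # the RPC name and jq path are taken verbatim from the trace above).
    get_transient_errcount() {
        local bdev=$1
        ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    # This run counted 262 such errors, so the assertion below held:
    (( $(get_transient_errcount nvme0n1) > 0 ))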
00:20:10.208 14:24:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:10.208 14:24:51 -- nvmf/common.sh@120 -- # set +e
00:20:10.208 14:24:51 -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:10.208 14:24:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:10.208 rmmod nvme_tcp
00:20:10.208 rmmod nvme_fabrics
00:20:10.208 rmmod nvme_keyring
00:20:10.208 14:24:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:10.208 14:24:51 -- nvmf/common.sh@124 -- # set -e
00:20:10.208 14:24:51 -- nvmf/common.sh@125 -- # return 0
00:20:10.208 14:24:51 -- nvmf/common.sh@478 -- # '[' -n 3207398 ']'
00:20:10.208 14:24:51 -- nvmf/common.sh@479 -- # killprocess 3207398
00:20:10.208 14:24:51 -- common/autotest_common.sh@936 -- # '[' -z 3207398 ']'
00:20:10.208 14:24:51 -- common/autotest_common.sh@940 -- # kill -0 3207398
00:20:10.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3207398) - No such process
00:20:10.208 14:24:51 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3207398 is not found'
00:20:10.208 Process with pid 3207398 is not found
00:20:10.208 14:24:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:20:10.208 14:24:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:20:10.208 14:24:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:20:10.208 14:24:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:10.208 14:24:51 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:10.208 14:24:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:10.208 14:24:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:10.208 14:24:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:12.115 14:24:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:12.115
00:20:12.115 real 0m35.693s
00:20:12.115 user 1m4.527s
00:20:12.115 sys 0m9.505s
14:24:53 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:12.115 14:24:53 -- common/autotest_common.sh@10 -- # set +x
00:20:12.115 ************************************
00:20:12.115 END TEST nvmf_digest
00:20:12.115 ************************************
00:20:12.401 14:24:53 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]]
00:20:12.401 14:24:53 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]]
00:20:12.401 14:24:53 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]]
00:20:12.401 14:24:53 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
14:24:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
14:24:53 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:12.401 14:24:53 -- common/autotest_common.sh@10 -- # set +x
00:20:12.401 ************************************
00:20:12.401 START TEST nvmf_bdevperf
00:20:12.401 ************************************
14:24:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:20:12.401 * Looking for test storage...
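Note how the second kill of pid 3207398 above tolerates the process having already exited: kill -0 probes the pid first, and a miss is reported rather than treated as fatal, so teardown can run the same cleanup from several trap paths. A simplified reconstruction of that pattern (the real autotest_common.sh also screens the process name via ps before killing, which is omitted here):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        if kill -0 "$pid" 2>/dev/null; then       # is the process still alive?
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid" || true                   # reap it, ignoring its exit code
        else
            echo "Process with pid $pid is not found"
        fi
    }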
00:20:12.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:20:12.401 14:24:53 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:20:12.401 14:24:53 -- nvmf/common.sh@7 -- # uname -s
00:20:12.401 14:24:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:12.401 14:24:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:12.401 14:24:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:12.401 14:24:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:12.401 14:24:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:12.401 14:24:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:12.401 14:24:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:12.401 14:24:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:12.401 14:24:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:12.401 14:24:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:12.401 14:24:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:20:12.401 14:24:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc
00:20:12.401 14:24:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:12.401 14:24:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:12.401 14:24:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:12.401 14:24:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:12.401 14:24:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:20:12.401 14:24:53 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:12.401 14:24:53 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:12.401 14:24:53 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:12.401 14:24:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:12.401 14:24:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:12.401 14:24:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:12.401 14:24:53 -- paths/export.sh@5 -- # export PATH
00:20:12.401 14:24:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:12.401 14:24:53 -- nvmf/common.sh@47 -- # : 0
00:20:12.401 14:24:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:20:12.401 14:24:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:20:12.401 14:24:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:20:12.401 14:24:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:12.401 14:24:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:12.401 14:24:53 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:20:12.401 14:24:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:20:12.401 14:24:53 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:20:12.401 14:24:53 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
00:20:12.401 14:24:53 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:20:12.401 14:24:53 -- host/bdevperf.sh@24 -- # nvmftestinit
00:20:12.401 14:24:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:20:12.401 14:24:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:12.401 14:24:53 -- nvmf/common.sh@437 -- # prepare_net_devs
00:20:12.401 14:24:53 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:20:12.401 14:24:53 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:20:12.401 14:24:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:12.401 14:24:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:12.401 14:24:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:12.401 14:24:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]]
00:20:12.401 14:24:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:20:12.401 14:24:53 -- nvmf/common.sh@285 -- # xtrace_disable
00:20:12.401 14:24:53 -- common/autotest_common.sh@10 -- # set +x
00:20:14.324 14:24:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:20:14.324 14:24:55 -- nvmf/common.sh@291 -- # pci_devs=()
00:20:14.324 14:24:55 -- nvmf/common.sh@291 -- # local -a pci_devs
00:20:14.324 14:24:55 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:20:14.324 14:24:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:20:14.324 14:24:55 -- nvmf/common.sh@293 -- # pci_drivers=()
00:20:14.324 14:24:55 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:20:14.324 14:24:55 -- nvmf/common.sh@295 -- # net_devs=()
00:20:14.324 14:24:55 -- nvmf/common.sh@295 -- # local -ga net_devs
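The paths/export.sh records above show the same three toolchain directories being prepended once per source, which is why PATH balloons to multiple copies of each entry. A hypothetical idempotent prepend (path_prepend is not part of the repo; it is named here purely for illustration) would keep repeated sourcing harmless:

    # Prepend a directory to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;            # already on PATH, do nothing
            *) PATH=$1:$PATH ;;
        esac
    }
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/go/1.21.1/bin
    export PATH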
-- # e810=() 00:20:14.324 14:24:55 -- nvmf/common.sh@296 -- # local -ga e810 00:20:14.324 14:24:55 -- nvmf/common.sh@297 -- # x722=() 00:20:14.324 14:24:55 -- nvmf/common.sh@297 -- # local -ga x722 00:20:14.324 14:24:55 -- nvmf/common.sh@298 -- # mlx=() 00:20:14.324 14:24:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:14.324 14:24:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:14.324 14:24:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:14.324 14:24:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:14.324 14:24:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:14.324 14:24:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:14.324 14:24:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:14.324 14:24:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:14.324 14:24:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:14.324 14:24:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:14.324 14:24:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:14.324 14:24:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:14.324 14:24:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:14.324 14:24:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:14.324 14:24:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:14.324 14:24:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:14.324 14:24:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:14.324 14:24:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:14.324 14:24:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.324 14:24:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:14.324 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:14.324 14:24:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.324 14:24:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.324 14:24:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.324 14:24:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.324 14:24:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.324 14:24:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.324 14:24:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:14.324 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:14.325 14:24:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.325 14:24:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.325 14:24:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.325 14:24:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.325 14:24:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.325 14:24:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:14.325 14:24:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:14.325 14:24:55 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:14.325 14:24:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.325 14:24:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.325 14:24:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:14.325 14:24:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.325 14:24:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:14.325 Found 
net devices under 0000:08:00.0: cvl_0_0 00:20:14.325 14:24:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.325 14:24:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.325 14:24:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.325 14:24:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:14.325 14:24:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.325 14:24:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:14.325 Found net devices under 0000:08:00.1: cvl_0_1 00:20:14.325 14:24:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.325 14:24:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:14.325 14:24:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:14.325 14:24:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:14.325 14:24:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:14.325 14:24:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:14.325 14:24:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:14.325 14:24:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:14.325 14:24:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:14.325 14:24:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:14.325 14:24:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:14.325 14:24:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:14.325 14:24:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:14.325 14:24:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:14.325 14:24:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.325 14:24:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:14.325 14:24:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:14.325 14:24:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:14.325 14:24:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:14.325 14:24:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:14.325 14:24:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:14.325 14:24:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:14.325 14:24:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:14.325 14:24:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:14.325 14:24:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:14.325 14:24:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:14.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:14.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:20:14.325 00:20:14.325 --- 10.0.0.2 ping statistics --- 00:20:14.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.325 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:20:14.325 14:24:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:14.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:14.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:20:14.325 00:20:14.325 --- 10.0.0.1 ping statistics --- 00:20:14.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.325 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:20:14.325 14:24:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.325 14:24:55 -- nvmf/common.sh@411 -- # return 0 00:20:14.325 14:24:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:14.325 14:24:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.325 14:24:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:14.325 14:24:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:14.325 14:24:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.325 14:24:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:14.325 14:24:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:14.325 14:24:55 -- host/bdevperf.sh@25 -- # tgt_init 00:20:14.325 14:24:55 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:20:14.325 14:24:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:14.325 14:24:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:14.325 14:24:55 -- common/autotest_common.sh@10 -- # set +x 00:20:14.325 14:24:55 -- nvmf/common.sh@470 -- # nvmfpid=3210369 00:20:14.325 14:24:55 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:14.325 14:24:55 -- nvmf/common.sh@471 -- # waitforlisten 3210369 00:20:14.325 14:24:55 -- common/autotest_common.sh@817 -- # '[' -z 3210369 ']' 00:20:14.325 14:24:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.325 14:24:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:14.325 14:24:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.325 14:24:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:14.325 14:24:55 -- common/autotest_common.sh@10 -- # set +x 00:20:14.325 [2024-04-26 14:24:55.631879] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:20:14.325 [2024-04-26 14:24:55.631979] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.325 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.325 [2024-04-26 14:24:55.698155] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:14.325 [2024-04-26 14:24:55.816868] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.325 [2024-04-26 14:24:55.816931] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.325 [2024-04-26 14:24:55.816948] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.325 [2024-04-26 14:24:55.816962] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.325 [2024-04-26 14:24:55.816974] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
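The nvmf_tcp_init and nvmfappstart steps traced above reduce to roughly the sketch below. Interface names, addresses, and the nvmf_tgt flags are taken straight from the log; the rpc_get_methods polling loop is only a stand-in for the script's waitforlisten helper, not the verbatim nvmf/common.sh code, and the relative paths assume an SPDK checkout as the working directory.

#!/usr/bin/env bash
# Recap of the namespace plumbing and target launch traced above.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target port, moved into the namespace
INI_IF=cvl_0_1   # initiator port, stays in the root namespace

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"                       # NVMF_INITIATOR_IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # NVMF_FIRST_TARGET_IP
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# allow NVMe/TCP traffic to the target port and verify both directions
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# nvmfappstart then launches the target inside the namespace (flags from the log)
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# crude waitforlisten stand-in: poll the RPC socket until the app answers
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do sleep 0.5; done

Both ports belong to the same physical E810 NIC, so pushing one end into a namespace forces traffic onto the wire instead of being short-circuited by the local stack.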
00:20:14.325 [2024-04-26 14:24:55.817070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.325 [2024-04-26 14:24:55.817162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:14.325 [2024-04-26 14:24:55.817166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.584 14:24:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:14.584 14:24:55 -- common/autotest_common.sh@850 -- # return 0 00:20:14.584 14:24:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:14.584 14:24:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:14.584 14:24:55 -- common/autotest_common.sh@10 -- # set +x 00:20:14.584 14:24:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.584 14:24:55 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:14.584 14:24:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.584 14:24:55 -- common/autotest_common.sh@10 -- # set +x 00:20:14.584 [2024-04-26 14:24:55.954525] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.584 14:24:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.584 14:24:55 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:14.584 14:24:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.584 14:24:55 -- common/autotest_common.sh@10 -- # set +x 00:20:14.584 Malloc0 00:20:14.584 14:24:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.584 14:24:56 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:14.584 14:24:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.584 14:24:56 -- common/autotest_common.sh@10 -- # set +x 00:20:14.584 14:24:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.584 14:24:56 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:14.584 14:24:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.584 14:24:56 -- common/autotest_common.sh@10 -- # set +x 00:20:14.584 14:24:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.584 14:24:56 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:14.584 14:24:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:14.584 14:24:56 -- common/autotest_common.sh@10 -- # set +x 00:20:14.584 [2024-04-26 14:24:56.022117] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.584 14:24:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:14.584 14:24:56 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:20:14.584 14:24:56 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:20:14.584 14:24:56 -- nvmf/common.sh@521 -- # config=() 00:20:14.584 14:24:56 -- nvmf/common.sh@521 -- # local subsystem config 00:20:14.584 14:24:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:14.584 14:24:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:14.584 { 00:20:14.584 "params": { 00:20:14.584 "name": "Nvme$subsystem", 00:20:14.584 "trtype": "$TEST_TRANSPORT", 00:20:14.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:14.584 "adrfam": "ipv4", 00:20:14.584 "trsvcid": "$NVMF_PORT", 00:20:14.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:14.584 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:14.584 "hdgst": ${hdgst:-false}, 00:20:14.584 "ddgst": ${ddgst:-false} 00:20:14.584 }, 00:20:14.584 "method": "bdev_nvme_attach_controller" 00:20:14.584 } 00:20:14.584 EOF 00:20:14.584 )") 00:20:14.584 14:24:56 -- nvmf/common.sh@543 -- # cat 00:20:14.584 14:24:56 -- nvmf/common.sh@545 -- # jq . 00:20:14.584 14:24:56 -- nvmf/common.sh@546 -- # IFS=, 00:20:14.584 14:24:56 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:14.584 "params": { 00:20:14.584 "name": "Nvme1", 00:20:14.584 "trtype": "tcp", 00:20:14.584 "traddr": "10.0.0.2", 00:20:14.584 "adrfam": "ipv4", 00:20:14.584 "trsvcid": "4420", 00:20:14.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:14.584 "hdgst": false, 00:20:14.584 "ddgst": false 00:20:14.584 }, 00:20:14.584 "method": "bdev_nvme_attach_controller" 00:20:14.584 }' 00:20:14.584 [2024-04-26 14:24:56.072758] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:20:14.584 [2024-04-26 14:24:56.072846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210392 ] 00:20:14.584 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.584 [2024-04-26 14:24:56.133532] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.843 [2024-04-26 14:24:56.252316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.101 Running I/O for 1 seconds... 00:20:16.036 00:20:16.036 Latency(us) 00:20:16.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.036 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:16.036 Verification LBA range: start 0x0 length 0x4000 00:20:16.036 Nvme1n1 : 1.02 7479.14 29.22 0.00 0.00 17038.63 3349.62 16990.81 00:20:16.036 =================================================================================================================== 00:20:16.036 Total : 7479.14 29.22 0.00 0.00 17038.63 3349.62 16990.81 00:20:16.294 14:24:57 -- host/bdevperf.sh@30 -- # bdevperfpid=3210591 00:20:16.294 14:24:57 -- host/bdevperf.sh@32 -- # sleep 3 00:20:16.294 14:24:57 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:20:16.294 14:24:57 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:20:16.294 14:24:57 -- nvmf/common.sh@521 -- # config=() 00:20:16.294 14:24:57 -- nvmf/common.sh@521 -- # local subsystem config 00:20:16.294 14:24:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:16.294 14:24:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:16.294 { 00:20:16.294 "params": { 00:20:16.294 "name": "Nvme$subsystem", 00:20:16.294 "trtype": "$TEST_TRANSPORT", 00:20:16.294 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:16.294 "adrfam": "ipv4", 00:20:16.294 "trsvcid": "$NVMF_PORT", 00:20:16.294 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:16.294 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:16.294 "hdgst": ${hdgst:-false}, 00:20:16.294 "ddgst": ${ddgst:-false} 00:20:16.294 }, 00:20:16.294 "method": "bdev_nvme_attach_controller" 00:20:16.294 } 00:20:16.294 EOF 00:20:16.294 )") 00:20:16.294 14:24:57 -- nvmf/common.sh@543 -- # cat 00:20:16.294 14:24:57 -- nvmf/common.sh@545 -- # jq . 
00:20:16.294 14:24:57 -- nvmf/common.sh@546 -- # IFS=, 00:20:16.294 14:24:57 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:16.294 "params": { 00:20:16.294 "name": "Nvme1", 00:20:16.294 "trtype": "tcp", 00:20:16.294 "traddr": "10.0.0.2", 00:20:16.294 "adrfam": "ipv4", 00:20:16.294 "trsvcid": "4420", 00:20:16.294 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.294 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:16.294 "hdgst": false, 00:20:16.294 "ddgst": false 00:20:16.294 }, 00:20:16.294 "method": "bdev_nvme_attach_controller" 00:20:16.294 }' 00:20:16.294 [2024-04-26 14:24:57.745578] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:20:16.294 [2024-04-26 14:24:57.745684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210591 ] 00:20:16.294 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.294 [2024-04-26 14:24:57.806024] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.553 [2024-04-26 14:24:57.923232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.811 Running I/O for 15 seconds... 00:20:19.347 14:25:00 -- host/bdevperf.sh@33 -- # kill -9 3210369 00:20:19.347 14:25:00 -- host/bdevperf.sh@35 -- # sleep 3 00:20:19.347 [2024-04-26 14:25:00.712375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.347 [2024-04-26 14:25:00.712431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.347 [2024-04-26 14:25:00.712466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.347 [2024-04-26 14:25:00.712485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.347 [2024-04-26 14:25:00.712506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.347 [2024-04-26 14:25:00.712525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.347 [2024-04-26 14:25:00.712545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.347 [2024-04-26 14:25:00.712564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.347 [2024-04-26 14:25:00.712583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.347 [2024-04-26 14:25:00.712600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.347 [2024-04-26 14:25:00.712621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.347 [2024-04-26 14:25:00.712646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.347 [2024-04-26 14:25:00.712684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.347 [2024-04-26 14:25:00.712702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.347 [2024-04-26 14:25:00.712723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.347 [2024-04-26 14:25:00.712741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.347 [2024-04-26 14:25:00.712760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.347 [2024-04-26 14:25:00.712778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.347 [2024-04-26 14:25:00.712797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.347 [2024-04-26 14:25:00.712816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.347 [2024-04-26 14:25:00.712835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.347 [2024-04-26 14:25:00.712855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.347 [2024-04-26 14:25:00.712874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.712891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.712913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.712930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.712950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.712965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.712983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.712999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:71 nsid:1 lba:20456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20536 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 
14:25:00.713764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.713981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.713996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.714018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.714034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.714058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.714074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.714092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.714107] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.714125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.714140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.714157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.714173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.714190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.348 [2024-04-26 14:25:00.714206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.348 [2024-04-26 14:25:00.714224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.714240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.714258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:20736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.714274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.714292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.714307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.714325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.714341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.714359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.714375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.714392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.714408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.714426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.714444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.714463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.714479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.714497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.714512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.714530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.714546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.714565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.714580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.714598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.714613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.714645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.714663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.714680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.714695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.714713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.714728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.714746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.714767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.714787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.714803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.714820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.714836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.714854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.714869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.714887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.714906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.714924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.714939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.714957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.714972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.714989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.715004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.715022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.715038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.715055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.715071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.715089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.715104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.715121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.715136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.715154] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.715169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.715187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.349 [2024-04-26 14:25:00.715202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.715220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.349 [2024-04-26 14:25:00.715235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.715252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.715267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.715285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.715301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.715322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.715338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.715355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.715371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.715388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.715403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.715420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.715436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.715453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.715468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.715485] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.715501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.715518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.349 [2024-04-26 14:25:00.715534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.349 [2024-04-26 14:25:00.715551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.350 [2024-04-26 14:25:00.715566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.715583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.350 [2024-04-26 14:25:00.715598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.715616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.350 [2024-04-26 14:25:00.715638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.715657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.350 [2024-04-26 14:25:00.715680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.715698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.350 [2024-04-26 14:25:00.715714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.715731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.350 [2024-04-26 14:25:00.715750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.715769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.350 [2024-04-26 14:25:00.715785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.715802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.350 [2024-04-26 14:25:00.715818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.715835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21088 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.350 [2024-04-26 14:25:00.715851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.715868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.350 [2024-04-26 14:25:00.715884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.715902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.350 [2024-04-26 14:25:00.715917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.715934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.350 [2024-04-26 14:25:00.715949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.715967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.350 [2024-04-26 14:25:00.715983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.350 [2024-04-26 14:25:00.716015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.350 [2024-04-26 14:25:00.716048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.350 [2024-04-26 14:25:00.716081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.350 [2024-04-26 14:25:00.716119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.350 [2024-04-26 14:25:00.716151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:19.350 [2024-04-26 14:25:00.716189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.350 [2024-04-26 14:25:00.716222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.350 [2024-04-26 14:25:00.716255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.350 [2024-04-26 14:25:00.716287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.350 [2024-04-26 14:25:00.716320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.350 [2024-04-26 14:25:00.716355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.350 [2024-04-26 14:25:00.716388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.350 [2024-04-26 14:25:00.716421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.350 [2024-04-26 14:25:00.716453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.350 [2024-04-26 14:25:00.716486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.350 [2024-04-26 14:25:00.716518] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.350 [2024-04-26 14:25:00.716551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.350 [2024-04-26 14:25:00.716584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.350 [2024-04-26 14:25:00.716621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.350 [2024-04-26 14:25:00.716668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.350 [2024-04-26 14:25:00.716706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.350 [2024-04-26 14:25:00.716748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.350 [2024-04-26 14:25:00.716783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.350 [2024-04-26 14:25:00.716816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.350 [2024-04-26 14:25:00.716848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f0860 is same with the state(5) to be set 00:20:19.350 [2024-04-26 14:25:00.716883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:20:19.350 [2024-04-26 14:25:00.716901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:19.350 [2024-04-26 14:25:00.716914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20320 len:8 PRP1 0x0 PRP2 0x0 00:20:19.350 [2024-04-26 14:25:00.716928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.350 [2024-04-26 14:25:00.716986] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16f0860 was disconnected and freed. reset controller. 00:20:19.351 [2024-04-26 14:25:00.721205] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.351 [2024-04-26 14:25:00.721277] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.351 [2024-04-26 14:25:00.722126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.351 [2024-04-26 14:25:00.722415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.351 [2024-04-26 14:25:00.722442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.351 [2024-04-26 14:25:00.722459] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.351 [2024-04-26 14:25:00.722747] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.351 [2024-04-26 14:25:00.723019] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.351 [2024-04-26 14:25:00.723040] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.351 [2024-04-26 14:25:00.723057] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.351 [2024-04-26 14:25:00.727070] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
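The block above is the tail of a qpair teardown: every in-flight WRITE and READ on qid:1 completes with ABORTED - SQ DELETION, the qpair is freed, and a controller reset begins. Note the SGL types: the WRITEs carry in-capsule data (SGL DATA BLOCK OFFSET) while the READs use transport SGL data blocks, the usual NVMe/TCP split. The reset then fails because connect() to 10.0.0.2:4420 returns errno 111. A minimal standalone sketch of that failure mode, assuming Linux, plain POSIX sockets, loopback as a stand-in for the target address, and no listener on port 4420 (this is not SPDK code):

    /* Standalone sketch (not SPDK code): reproduce the "connect() failed,
     * errno = 111" that posix_sock_create reports once the NVMe/TCP target
     * has gone away. Assumes Linux and no listener on the port. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(4420),   /* NVMe/TCP well-known port */
        };
        /* 127.0.0.1 is a stand-in for the log's 10.0.0.2 */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* On Linux, ECONNREFUSED is errno 111, the value in the log. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

On Linux errno 111 is ECONNREFUSED: the peer answered with RST because nothing is listening, consistent with a target-side listener that has been torn down.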
00:20:19.351 [2024-04-26 14:25:00.736058] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.351 [2024-04-26 14:25:00.736609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.351 [2024-04-26 14:25:00.736869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.351 [2024-04-26 14:25:00.736951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.351 [2024-04-26 14:25:00.736970] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.351 [2024-04-26 14:25:00.737238] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.351 [2024-04-26 14:25:00.737502] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.351 [2024-04-26 14:25:00.737525] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.351 [2024-04-26 14:25:00.737540] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.351 [2024-04-26 14:25:00.741564] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.351 [2024-04-26 14:25:00.750379] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.351 [2024-04-26 14:25:00.750930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.351 [2024-04-26 14:25:00.751228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.351 [2024-04-26 14:25:00.751279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.351 [2024-04-26 14:25:00.751297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.351 [2024-04-26 14:25:00.751564] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.351 [2024-04-26 14:25:00.751844] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.351 [2024-04-26 14:25:00.751869] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.351 [2024-04-26 14:25:00.751884] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.351 [2024-04-26 14:25:00.755911] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.351 [2024-04-26 14:25:00.764889] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.351 [2024-04-26 14:25:00.765362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.351 [2024-04-26 14:25:00.765623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.351 [2024-04-26 14:25:00.765698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.351 [2024-04-26 14:25:00.765716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.351 [2024-04-26 14:25:00.765979] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.351 [2024-04-26 14:25:00.766242] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.351 [2024-04-26 14:25:00.766272] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.351 [2024-04-26 14:25:00.766287] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.351 [2024-04-26 14:25:00.770324] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.351 [2024-04-26 14:25:00.779332] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.351 [2024-04-26 14:25:00.779880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.351 [2024-04-26 14:25:00.780117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.351 [2024-04-26 14:25:00.780168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.351 [2024-04-26 14:25:00.780187] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.351 [2024-04-26 14:25:00.780456] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.351 [2024-04-26 14:25:00.780738] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.351 [2024-04-26 14:25:00.780762] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.351 [2024-04-26 14:25:00.780777] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.351 [2024-04-26 14:25:00.784854] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.351 [2024-04-26 14:25:00.793715] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.351 [2024-04-26 14:25:00.794204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.351 [2024-04-26 14:25:00.794475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.351 [2024-04-26 14:25:00.794502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.351 [2024-04-26 14:25:00.794520] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.351 [2024-04-26 14:25:00.794792] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.351 [2024-04-26 14:25:00.795056] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.351 [2024-04-26 14:25:00.795079] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.351 [2024-04-26 14:25:00.795094] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.351 [2024-04-26 14:25:00.799120] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.351 [2024-04-26 14:25:00.808145] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.351 [2024-04-26 14:25:00.808668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.351 [2024-04-26 14:25:00.808936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.351 [2024-04-26 14:25:00.808985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.351 [2024-04-26 14:25:00.809002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.351 [2024-04-26 14:25:00.809263] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.351 [2024-04-26 14:25:00.809532] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.351 [2024-04-26 14:25:00.809554] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.351 [2024-04-26 14:25:00.809578] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.351 [2024-04-26 14:25:00.813637] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.351 [2024-04-26 14:25:00.822725] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.351 [2024-04-26 14:25:00.823212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.351 [2024-04-26 14:25:00.823451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.351 [2024-04-26 14:25:00.823497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.351 [2024-04-26 14:25:00.823515] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.351 [2024-04-26 14:25:00.823788] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.351 [2024-04-26 14:25:00.824054] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.351 [2024-04-26 14:25:00.824079] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.351 [2024-04-26 14:25:00.824094] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.351 [2024-04-26 14:25:00.828133] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.351 [2024-04-26 14:25:00.837143] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.351 [2024-04-26 14:25:00.837688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.351 [2024-04-26 14:25:00.837995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.351 [2024-04-26 14:25:00.838037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.351 [2024-04-26 14:25:00.838057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.351 [2024-04-26 14:25:00.838325] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.351 [2024-04-26 14:25:00.838589] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.351 [2024-04-26 14:25:00.838612] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.351 [2024-04-26 14:25:00.838627] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.351 [2024-04-26 14:25:00.842688] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.351 [2024-04-26 14:25:00.851519] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.351 [2024-04-26 14:25:00.852128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.351 [2024-04-26 14:25:00.852359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.352 [2024-04-26 14:25:00.852418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.352 [2024-04-26 14:25:00.852459] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.352 [2024-04-26 14:25:00.852743] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.352 [2024-04-26 14:25:00.853010] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.352 [2024-04-26 14:25:00.853033] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.352 [2024-04-26 14:25:00.853049] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.352 [2024-04-26 14:25:00.857113] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.352 [2024-04-26 14:25:00.865886] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.352 [2024-04-26 14:25:00.866435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.352 [2024-04-26 14:25:00.866658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.352 [2024-04-26 14:25:00.866687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.352 [2024-04-26 14:25:00.866705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.352 [2024-04-26 14:25:00.866974] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.352 [2024-04-26 14:25:00.867241] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.352 [2024-04-26 14:25:00.867265] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.352 [2024-04-26 14:25:00.867280] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.352 [2024-04-26 14:25:00.871304] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.352 [2024-04-26 14:25:00.880293] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.352 [2024-04-26 14:25:00.880801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.352 [2024-04-26 14:25:00.881083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.352 [2024-04-26 14:25:00.881109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.352 [2024-04-26 14:25:00.881127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.352 [2024-04-26 14:25:00.881388] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.352 [2024-04-26 14:25:00.881661] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.352 [2024-04-26 14:25:00.881687] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.352 [2024-04-26 14:25:00.881702] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.352 [2024-04-26 14:25:00.885708] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.352 [2024-04-26 14:25:00.894691] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.352 [2024-04-26 14:25:00.895171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.352 [2024-04-26 14:25:00.895419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.352 [2024-04-26 14:25:00.895450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.352 [2024-04-26 14:25:00.895467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.352 [2024-04-26 14:25:00.895747] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.352 [2024-04-26 14:25:00.896012] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.352 [2024-04-26 14:25:00.896035] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.352 [2024-04-26 14:25:00.896050] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.352 [2024-04-26 14:25:00.900057] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
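From here the same cycle repeats roughly every 14 ms: nvme_ctrlr_disconnect starts a reset, both connect() attempts fail with errno 111, nvme_ctrlr_process_init marks the controller failed, and bdev_nvme reports the reset as failed before the next attempt. A hedged sketch of the loop's shape only, where try_connect() is a hypothetical stand-in; SPDK's real reconnect path is poller-driven in bdev_nvme.c and nvme_ctrlr.c, not a blocking loop:

    /* Illustrative retry loop matching the log's shape: each attempt fails
     * fast with ECONNREFUSED and is retried after a short delay.
     * try_connect() is hypothetical; it stands in for the failing
     * nvme_tcp_qpair_connect_sock() path seen above. */
    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    static bool try_connect(void)
    {
        errno = ECONNREFUSED;   /* stand-in for the failing connect() */
        return false;
    }

    int main(void)
    {
        const int max_attempts = 5;   /* the log shows far more attempts */

        for (int attempt = 1; attempt <= max_attempts; attempt++) {
            if (try_connect()) {
                printf("attempt %d: reconnected\n", attempt);
                return 0;
            }
            printf("attempt %d: connect failed (errno %d), reset failed\n",
                   attempt, errno);
            usleep(14000);   /* ~14 ms between attempts, as in the log */
        }
        fprintf(stderr, "controller left in failed state\n");
        return 1;
    }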
00:20:19.352 [2024-04-26 14:25:00.909112] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.352 [2024-04-26 14:25:00.909604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.352 [2024-04-26 14:25:00.909869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.352 [2024-04-26 14:25:00.909911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.352 [2024-04-26 14:25:00.909930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.352 [2024-04-26 14:25:00.910204] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.352 [2024-04-26 14:25:00.910470] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.352 [2024-04-26 14:25:00.910493] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.352 [2024-04-26 14:25:00.910508] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.612 [2024-04-26 14:25:00.914692] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.612 [2024-04-26 14:25:00.923482] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.612 [2024-04-26 14:25:00.923988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.612 [2024-04-26 14:25:00.924170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.612 [2024-04-26 14:25:00.924218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.612 [2024-04-26 14:25:00.924246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.612 [2024-04-26 14:25:00.924521] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.612 [2024-04-26 14:25:00.924799] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.612 [2024-04-26 14:25:00.924823] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.612 [2024-04-26 14:25:00.924838] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.612 [2024-04-26 14:25:00.928865] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.612 [2024-04-26 14:25:00.937885] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.612 [2024-04-26 14:25:00.938445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.612 [2024-04-26 14:25:00.938661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.612 [2024-04-26 14:25:00.938699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.612 [2024-04-26 14:25:00.938717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.612 [2024-04-26 14:25:00.938985] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.612 [2024-04-26 14:25:00.939251] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.612 [2024-04-26 14:25:00.939273] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.612 [2024-04-26 14:25:00.939288] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.612 [2024-04-26 14:25:00.943291] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.612 [2024-04-26 14:25:00.952269] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.612 [2024-04-26 14:25:00.952709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.612 [2024-04-26 14:25:00.952919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.612 [2024-04-26 14:25:00.952970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.612 [2024-04-26 14:25:00.952988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.612 [2024-04-26 14:25:00.953249] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.612 [2024-04-26 14:25:00.953514] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.612 [2024-04-26 14:25:00.953537] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.612 [2024-04-26 14:25:00.953552] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.612 [2024-04-26 14:25:00.957578] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.612 [2024-04-26 14:25:00.966568] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.612 [2024-04-26 14:25:00.967035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.612 [2024-04-26 14:25:00.967310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.612 [2024-04-26 14:25:00.967362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.612 [2024-04-26 14:25:00.967380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.612 [2024-04-26 14:25:00.967667] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.612 [2024-04-26 14:25:00.967957] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.612 [2024-04-26 14:25:00.967982] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.612 [2024-04-26 14:25:00.967997] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.612 [2024-04-26 14:25:00.972113] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.612 [2024-04-26 14:25:00.980967] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.612 [2024-04-26 14:25:00.981438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.612 [2024-04-26 14:25:00.981600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.612 [2024-04-26 14:25:00.981639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.612 [2024-04-26 14:25:00.981660] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.612 [2024-04-26 14:25:00.981929] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.612 [2024-04-26 14:25:00.982196] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.612 [2024-04-26 14:25:00.982218] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.612 [2024-04-26 14:25:00.982233] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.612 [2024-04-26 14:25:00.986269] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.612 [2024-04-26 14:25:00.995479] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.612 [2024-04-26 14:25:00.995996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.612 [2024-04-26 14:25:00.996316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.612 [2024-04-26 14:25:00.996349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.612 [2024-04-26 14:25:00.996367] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.612 [2024-04-26 14:25:00.996628] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.612 [2024-04-26 14:25:00.996903] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.612 [2024-04-26 14:25:00.996926] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.612 [2024-04-26 14:25:00.996941] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.612 [2024-04-26 14:25:01.000961] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.612 [2024-04-26 14:25:01.010015] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.612 [2024-04-26 14:25:01.010450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.612 [2024-04-26 14:25:01.010698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.612 [2024-04-26 14:25:01.010740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.612 [2024-04-26 14:25:01.010760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.612 [2024-04-26 14:25:01.011027] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.612 [2024-04-26 14:25:01.011291] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.612 [2024-04-26 14:25:01.011314] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.612 [2024-04-26 14:25:01.011329] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.612 [2024-04-26 14:25:01.015385] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.612 [2024-04-26 14:25:01.024455] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.612 [2024-04-26 14:25:01.024901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.612 [2024-04-26 14:25:01.025150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.612 [2024-04-26 14:25:01.025191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.612 [2024-04-26 14:25:01.025208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.612 [2024-04-26 14:25:01.025470] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.612 [2024-04-26 14:25:01.025745] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.612 [2024-04-26 14:25:01.025769] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.613 [2024-04-26 14:25:01.025785] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.613 [2024-04-26 14:25:01.029790] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.613 [2024-04-26 14:25:01.038769] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.613 [2024-04-26 14:25:01.039349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.613 [2024-04-26 14:25:01.039579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.613 [2024-04-26 14:25:01.039626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.613 [2024-04-26 14:25:01.039662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.613 [2024-04-26 14:25:01.039931] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.613 [2024-04-26 14:25:01.040197] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.613 [2024-04-26 14:25:01.040220] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.613 [2024-04-26 14:25:01.040235] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.613 [2024-04-26 14:25:01.044237] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.613 [2024-04-26 14:25:01.053204] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.613 [2024-04-26 14:25:01.053694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.613 [2024-04-26 14:25:01.053957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.613 [2024-04-26 14:25:01.054002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.613 [2024-04-26 14:25:01.054021] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.613 [2024-04-26 14:25:01.054289] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.613 [2024-04-26 14:25:01.054555] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.613 [2024-04-26 14:25:01.054577] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.613 [2024-04-26 14:25:01.054593] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.613 [2024-04-26 14:25:01.058601] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.613 [2024-04-26 14:25:01.067573] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.613 [2024-04-26 14:25:01.068054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.613 [2024-04-26 14:25:01.068273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.613 [2024-04-26 14:25:01.068313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.613 [2024-04-26 14:25:01.068332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.613 [2024-04-26 14:25:01.068600] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.613 [2024-04-26 14:25:01.068875] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.613 [2024-04-26 14:25:01.068899] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.613 [2024-04-26 14:25:01.068914] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.613 [2024-04-26 14:25:01.072912] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.613 [2024-04-26 14:25:01.081955] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.613 [2024-04-26 14:25:01.082437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.613 [2024-04-26 14:25:01.082730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.613 [2024-04-26 14:25:01.082760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.613 [2024-04-26 14:25:01.082784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.613 [2024-04-26 14:25:01.083053] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.613 [2024-04-26 14:25:01.083316] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.613 [2024-04-26 14:25:01.083339] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.613 [2024-04-26 14:25:01.083354] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.613 [2024-04-26 14:25:01.087405] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.613 [2024-04-26 14:25:01.096467] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.613 [2024-04-26 14:25:01.096962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.613 [2024-04-26 14:25:01.097173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.613 [2024-04-26 14:25:01.097224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.613 [2024-04-26 14:25:01.097243] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.613 [2024-04-26 14:25:01.097511] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.613 [2024-04-26 14:25:01.097792] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.613 [2024-04-26 14:25:01.097816] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.613 [2024-04-26 14:25:01.097831] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.613 [2024-04-26 14:25:01.101840] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.613 [2024-04-26 14:25:01.110871] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.613 [2024-04-26 14:25:01.111355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.613 [2024-04-26 14:25:01.111580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.613 [2024-04-26 14:25:01.111623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.613 [2024-04-26 14:25:01.111654] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.613 [2024-04-26 14:25:01.111923] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.613 [2024-04-26 14:25:01.112190] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.613 [2024-04-26 14:25:01.112215] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.613 [2024-04-26 14:25:01.112230] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.613 [2024-04-26 14:25:01.116264] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.613 [2024-04-26 14:25:01.125335] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.613 [2024-04-26 14:25:01.125889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.613 [2024-04-26 14:25:01.126142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.613 [2024-04-26 14:25:01.126199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.613 [2024-04-26 14:25:01.126216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.613 [2024-04-26 14:25:01.126478] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.613 [2024-04-26 14:25:01.126760] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.613 [2024-04-26 14:25:01.126786] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.613 [2024-04-26 14:25:01.126801] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.613 [2024-04-26 14:25:01.130844] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.613 [2024-04-26 14:25:01.139865] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.613 [2024-04-26 14:25:01.140357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.613 [2024-04-26 14:25:01.140610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.613 [2024-04-26 14:25:01.140646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.613 [2024-04-26 14:25:01.140700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.613 [2024-04-26 14:25:01.140994] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.613 [2024-04-26 14:25:01.141259] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.613 [2024-04-26 14:25:01.141282] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.613 [2024-04-26 14:25:01.141297] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.613 [2024-04-26 14:25:01.145296] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.613 [2024-04-26 14:25:01.154330] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.613 [2024-04-26 14:25:01.154848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.613 [2024-04-26 14:25:01.155121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.613 [2024-04-26 14:25:01.155174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.613 [2024-04-26 14:25:01.155192] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.613 [2024-04-26 14:25:01.155460] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.613 [2024-04-26 14:25:01.155743] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.613 [2024-04-26 14:25:01.155767] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.613 [2024-04-26 14:25:01.155783] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.613 [2024-04-26 14:25:01.159824] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.613 [2024-04-26 14:25:01.168695] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.613 [2024-04-26 14:25:01.169173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.613 [2024-04-26 14:25:01.169439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.614 [2024-04-26 14:25:01.169493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.614 [2024-04-26 14:25:01.169511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.614 [2024-04-26 14:25:01.169796] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.614 [2024-04-26 14:25:01.170062] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.614 [2024-04-26 14:25:01.170090] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.614 [2024-04-26 14:25:01.170106] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.614 [2024-04-26 14:25:01.174158] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.872 [2024-04-26 14:25:01.183153] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.873 [2024-04-26 14:25:01.183662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.873 [2024-04-26 14:25:01.183933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.873 [2024-04-26 14:25:01.183976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.873 [2024-04-26 14:25:01.183995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.873 [2024-04-26 14:25:01.184286] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.873 [2024-04-26 14:25:01.184553] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.873 [2024-04-26 14:25:01.184577] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.873 [2024-04-26 14:25:01.184592] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.873 [2024-04-26 14:25:01.188664] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.873 [2024-04-26 14:25:01.197709] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.873 [2024-04-26 14:25:01.198160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.873 [2024-04-26 14:25:01.198366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.873 [2024-04-26 14:25:01.198411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.873 [2024-04-26 14:25:01.198432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.873 [2024-04-26 14:25:01.198730] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.873 [2024-04-26 14:25:01.199004] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.873 [2024-04-26 14:25:01.199027] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.873 [2024-04-26 14:25:01.199042] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.873 [2024-04-26 14:25:01.203066] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.873 [2024-04-26 14:25:01.212161] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.873 [2024-04-26 14:25:01.212689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.873 [2024-04-26 14:25:01.212900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.873 [2024-04-26 14:25:01.212928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:19.873 [2024-04-26 14:25:01.212946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:19.873 [2024-04-26 14:25:01.213214] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:19.873 [2024-04-26 14:25:01.213480] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.873 [2024-04-26 14:25:01.213503] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.873 [2024-04-26 14:25:01.213524] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.873 [2024-04-26 14:25:01.217571] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:19.873 [2024-04-26 14:25:01.226686] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:19.873 [2024-04-26 14:25:01.227236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.873 [2024-04-26 14:25:01.227474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.873 [2024-04-26 14:25:01.227533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:19.873 [2024-04-26 14:25:01.227551] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:19.873 [2024-04-26 14:25:01.227832] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:19.873 [2024-04-26 14:25:01.228097] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:19.873 [2024-04-26 14:25:01.228120] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:19.873 [2024-04-26 14:25:01.228135] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:19.873 [2024-04-26 14:25:01.232211] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:19.873 [2024-04-26 14:25:01.241056] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:19.873 [2024-04-26 14:25:01.241614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.873 [2024-04-26 14:25:01.241838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.873 [2024-04-26 14:25:01.241889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:19.873 [2024-04-26 14:25:01.241907] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:19.873 [2024-04-26 14:25:01.242176] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:19.873 [2024-04-26 14:25:01.242442] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:19.873 [2024-04-26 14:25:01.242466] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:19.873 [2024-04-26 14:25:01.242481] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:19.873 [2024-04-26 14:25:01.246515] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:19.873 [2024-04-26 14:25:01.255568] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:19.873 [2024-04-26 14:25:01.256113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.873 [2024-04-26 14:25:01.256410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.873 [2024-04-26 14:25:01.256439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:19.873 [2024-04-26 14:25:01.256456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:19.873 [2024-04-26 14:25:01.256745] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:19.873 [2024-04-26 14:25:01.257010] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:19.873 [2024-04-26 14:25:01.257033] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:19.873 [2024-04-26 14:25:01.257048] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:19.873 [2024-04-26 14:25:01.261114] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:19.873 [2024-04-26 14:25:01.270004] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:19.873 [2024-04-26 14:25:01.270541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.873 [2024-04-26 14:25:01.270792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.873 [2024-04-26 14:25:01.270840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:19.873 [2024-04-26 14:25:01.270858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:19.873 [2024-04-26 14:25:01.271120] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:19.873 [2024-04-26 14:25:01.271386] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:19.873 [2024-04-26 14:25:01.271408] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:19.873 [2024-04-26 14:25:01.271423] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:19.873 [2024-04-26 14:25:01.275449] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:19.873 [2024-04-26 14:25:01.284491] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:19.873 [2024-04-26 14:25:01.284983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.873 [2024-04-26 14:25:01.285232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.873 [2024-04-26 14:25:01.285261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:19.873 [2024-04-26 14:25:01.285279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:19.873 [2024-04-26 14:25:01.285547] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:19.873 [2024-04-26 14:25:01.285827] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:19.873 [2024-04-26 14:25:01.285851] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:19.873 [2024-04-26 14:25:01.285866] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:19.873 [2024-04-26 14:25:01.289901] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:19.873 [2024-04-26 14:25:01.299015] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:19.873 [2024-04-26 14:25:01.299531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.873 [2024-04-26 14:25:01.299680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.873 [2024-04-26 14:25:01.299710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:19.873 [2024-04-26 14:25:01.299729] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:19.873 [2024-04-26 14:25:01.299996] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:19.873 [2024-04-26 14:25:01.300261] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:19.873 [2024-04-26 14:25:01.300284] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:19.873 [2024-04-26 14:25:01.300300] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:19.873 [2024-04-26 14:25:01.304345] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:19.873 [2024-04-26 14:25:01.313408] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:19.873 [2024-04-26 14:25:01.313983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.873 [2024-04-26 14:25:01.314264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.873 [2024-04-26 14:25:01.314315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:19.873 [2024-04-26 14:25:01.314332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:19.873 [2024-04-26 14:25:01.314600] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:19.873 [2024-04-26 14:25:01.314878] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:19.873 [2024-04-26 14:25:01.314902] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:19.873 [2024-04-26 14:25:01.314919] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:19.873 [2024-04-26 14:25:01.318953] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:19.873 [2024-04-26 14:25:01.327814] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:19.873 [2024-04-26 14:25:01.328272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.873 [2024-04-26 14:25:01.328446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.873 [2024-04-26 14:25:01.328531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:19.873 [2024-04-26 14:25:01.328549] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:19.873 [2024-04-26 14:25:01.328831] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:19.873 [2024-04-26 14:25:01.329099] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:19.873 [2024-04-26 14:25:01.329122] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:19.874 [2024-04-26 14:25:01.329137] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:19.874 [2024-04-26 14:25:01.333171] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:19.874 [2024-04-26 14:25:01.342233] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:19.874 [2024-04-26 14:25:01.342764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.874 [2024-04-26 14:25:01.343026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.874 [2024-04-26 14:25:01.343075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:19.874 [2024-04-26 14:25:01.343093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:19.874 [2024-04-26 14:25:01.343360] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:19.874 [2024-04-26 14:25:01.343625] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:19.874 [2024-04-26 14:25:01.343660] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:19.874 [2024-04-26 14:25:01.343676] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:19.874 [2024-04-26 14:25:01.347731] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:19.874 [2024-04-26 14:25:01.356818] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:19.874 [2024-04-26 14:25:01.357398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.874 [2024-04-26 14:25:01.357613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.874 [2024-04-26 14:25:01.357673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:19.874 [2024-04-26 14:25:01.357692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:19.874 [2024-04-26 14:25:01.357960] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:19.874 [2024-04-26 14:25:01.358226] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:19.874 [2024-04-26 14:25:01.358249] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:19.874 [2024-04-26 14:25:01.358265] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:19.874 [2024-04-26 14:25:01.362300] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:19.874 [2024-04-26 14:25:01.371377] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:19.874 [2024-04-26 14:25:01.371902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.874 [2024-04-26 14:25:01.372157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.874 [2024-04-26 14:25:01.372209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:19.874 [2024-04-26 14:25:01.372226] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:19.874 [2024-04-26 14:25:01.372494] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:19.874 [2024-04-26 14:25:01.372773] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:19.874 [2024-04-26 14:25:01.372798] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:19.874 [2024-04-26 14:25:01.372813] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:19.874 [2024-04-26 14:25:01.376833] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:19.874 [2024-04-26 14:25:01.385870] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:19.874 [2024-04-26 14:25:01.386341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.874 [2024-04-26 14:25:01.386608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.874 [2024-04-26 14:25:01.386685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:19.874 [2024-04-26 14:25:01.386704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:19.874 [2024-04-26 14:25:01.386978] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:19.874 [2024-04-26 14:25:01.387242] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:19.874 [2024-04-26 14:25:01.387265] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:19.874 [2024-04-26 14:25:01.387281] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:19.874 [2024-04-26 14:25:01.391372] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:19.874 [2024-04-26 14:25:01.400240] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:19.874 [2024-04-26 14:25:01.400772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.874 [2024-04-26 14:25:01.401032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.874 [2024-04-26 14:25:01.401089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:19.874 [2024-04-26 14:25:01.401108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:19.874 [2024-04-26 14:25:01.401375] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:19.874 [2024-04-26 14:25:01.401654] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:19.874 [2024-04-26 14:25:01.401677] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:19.874 [2024-04-26 14:25:01.401693] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:19.874 [2024-04-26 14:25:01.405709] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:19.874 [2024-04-26 14:25:01.414728] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:19.874 [2024-04-26 14:25:01.415271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.874 [2024-04-26 14:25:01.415546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.874 [2024-04-26 14:25:01.415590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:19.874 [2024-04-26 14:25:01.415608] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:19.874 [2024-04-26 14:25:01.415888] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:19.874 [2024-04-26 14:25:01.416155] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:19.874 [2024-04-26 14:25:01.416178] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:19.874 [2024-04-26 14:25:01.416193] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:19.874 [2024-04-26 14:25:01.420255] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:19.874 [2024-04-26 14:25:01.429183] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:19.874 [2024-04-26 14:25:01.429696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.874 [2024-04-26 14:25:01.429938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:19.874 [2024-04-26 14:25:01.429983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:19.874 [2024-04-26 14:25:01.430001] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:19.874 [2024-04-26 14:25:01.430270] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:19.874 [2024-04-26 14:25:01.430536] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:19.874 [2024-04-26 14:25:01.430559] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:19.874 [2024-04-26 14:25:01.430575] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:19.874 [2024-04-26 14:25:01.434647] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.132 [2024-04-26 14:25:01.443610] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.132 [2024-04-26 14:25:01.444155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.444381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.444430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.132 [2024-04-26 14:25:01.444453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.132 [2024-04-26 14:25:01.444746] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.132 [2024-04-26 14:25:01.445019] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.132 [2024-04-26 14:25:01.445044] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.132 [2024-04-26 14:25:01.445060] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.132 [2024-04-26 14:25:01.449134] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.132 [2024-04-26 14:25:01.457974] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.132 [2024-04-26 14:25:01.458476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.458704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.458733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.132 [2024-04-26 14:25:01.458751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.132 [2024-04-26 14:25:01.459019] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.132 [2024-04-26 14:25:01.459285] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.132 [2024-04-26 14:25:01.459308] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.132 [2024-04-26 14:25:01.459324] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.132 [2024-04-26 14:25:01.463388] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.132 [2024-04-26 14:25:01.472428] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.132 [2024-04-26 14:25:01.472966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.473199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.473229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.132 [2024-04-26 14:25:01.473247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.132 [2024-04-26 14:25:01.473516] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.132 [2024-04-26 14:25:01.473795] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.132 [2024-04-26 14:25:01.473819] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.132 [2024-04-26 14:25:01.473835] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.132 [2024-04-26 14:25:01.477920] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.132 [2024-04-26 14:25:01.487069] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.132 [2024-04-26 14:25:01.487656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.487900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.487929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.132 [2024-04-26 14:25:01.487947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.132 [2024-04-26 14:25:01.488226] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.132 [2024-04-26 14:25:01.488493] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.132 [2024-04-26 14:25:01.488515] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.132 [2024-04-26 14:25:01.488531] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.132 [2024-04-26 14:25:01.492585] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.132 [2024-04-26 14:25:01.501376] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.132 [2024-04-26 14:25:01.501887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.502141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.502191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.132 [2024-04-26 14:25:01.502208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.132 [2024-04-26 14:25:01.502469] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.132 [2024-04-26 14:25:01.502746] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.132 [2024-04-26 14:25:01.502770] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.132 [2024-04-26 14:25:01.502786] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.132 [2024-04-26 14:25:01.506823] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.132 [2024-04-26 14:25:01.515843] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.132 [2024-04-26 14:25:01.516371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.516546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.516643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.132 [2024-04-26 14:25:01.516664] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.132 [2024-04-26 14:25:01.516933] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.132 [2024-04-26 14:25:01.517201] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.132 [2024-04-26 14:25:01.517224] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.132 [2024-04-26 14:25:01.517239] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.132 [2024-04-26 14:25:01.521305] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.132 [2024-04-26 14:25:01.530385] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.132 [2024-04-26 14:25:01.530922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.531216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.531245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.132 [2024-04-26 14:25:01.531264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.132 [2024-04-26 14:25:01.531537] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.132 [2024-04-26 14:25:01.531830] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.132 [2024-04-26 14:25:01.531855] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.132 [2024-04-26 14:25:01.531871] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.132 [2024-04-26 14:25:01.535904] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.132 [2024-04-26 14:25:01.544924] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.132 [2024-04-26 14:25:01.545461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.545621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.545661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.132 [2024-04-26 14:25:01.545680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.132 [2024-04-26 14:25:01.545947] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.132 [2024-04-26 14:25:01.546213] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.132 [2024-04-26 14:25:01.546236] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.132 [2024-04-26 14:25:01.546251] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.132 [2024-04-26 14:25:01.550312] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.132 [2024-04-26 14:25:01.559395] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.132 [2024-04-26 14:25:01.559925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.560164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.560213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.132 [2024-04-26 14:25:01.560231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.132 [2024-04-26 14:25:01.560499] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.132 [2024-04-26 14:25:01.560779] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.132 [2024-04-26 14:25:01.560804] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.132 [2024-04-26 14:25:01.560820] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.132 [2024-04-26 14:25:01.564858] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.132 [2024-04-26 14:25:01.573892] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.132 [2024-04-26 14:25:01.574377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.574617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.574676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.132 [2024-04-26 14:25:01.574694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.132 [2024-04-26 14:25:01.574956] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.132 [2024-04-26 14:25:01.575233] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.132 [2024-04-26 14:25:01.575262] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.132 [2024-04-26 14:25:01.575278] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.132 [2024-04-26 14:25:01.579281] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.132 [2024-04-26 14:25:01.588274] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.132 [2024-04-26 14:25:01.588832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.589072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.589119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.132 [2024-04-26 14:25:01.589138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.132 [2024-04-26 14:25:01.589406] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.132 [2024-04-26 14:25:01.589682] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.132 [2024-04-26 14:25:01.589706] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.132 [2024-04-26 14:25:01.589721] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.132 [2024-04-26 14:25:01.593723] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.132 [2024-04-26 14:25:01.602748] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.132 [2024-04-26 14:25:01.603195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.603450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.603495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.132 [2024-04-26 14:25:01.603513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.132 [2024-04-26 14:25:01.603785] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.132 [2024-04-26 14:25:01.604050] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.132 [2024-04-26 14:25:01.604072] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.132 [2024-04-26 14:25:01.604087] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.132 [2024-04-26 14:25:01.608106] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.132 [2024-04-26 14:25:01.617103] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.132 [2024-04-26 14:25:01.617598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.617851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.132 [2024-04-26 14:25:01.617882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.132 [2024-04-26 14:25:01.617900] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.132 [2024-04-26 14:25:01.618168] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.133 [2024-04-26 14:25:01.618434] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.133 [2024-04-26 14:25:01.618456] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.133 [2024-04-26 14:25:01.618477] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.133 [2024-04-26 14:25:01.622521] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.133 [2024-04-26 14:25:01.631557] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.133 [2024-04-26 14:25:01.632092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.133 [2024-04-26 14:25:01.632390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.133 [2024-04-26 14:25:01.632421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.133 [2024-04-26 14:25:01.632438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.133 [2024-04-26 14:25:01.632719] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.133 [2024-04-26 14:25:01.632986] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.133 [2024-04-26 14:25:01.633009] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.133 [2024-04-26 14:25:01.633024] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.133 [2024-04-26 14:25:01.637030] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.133 [2024-04-26 14:25:01.646031] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.133 [2024-04-26 14:25:01.646511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.133 [2024-04-26 14:25:01.646750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.133 [2024-04-26 14:25:01.646800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.133 [2024-04-26 14:25:01.646819] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.133 [2024-04-26 14:25:01.647087] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.133 [2024-04-26 14:25:01.647353] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.133 [2024-04-26 14:25:01.647376] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.133 [2024-04-26 14:25:01.647391] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.133 [2024-04-26 14:25:01.651417] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.133 [2024-04-26 14:25:01.660418] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.133 [2024-04-26 14:25:01.660935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.133 [2024-04-26 14:25:01.661146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.133 [2024-04-26 14:25:01.661177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.133 [2024-04-26 14:25:01.661195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.133 [2024-04-26 14:25:01.661462] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.133 [2024-04-26 14:25:01.661747] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.133 [2024-04-26 14:25:01.661772] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.133 [2024-04-26 14:25:01.661787] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.133 [2024-04-26 14:25:01.665800] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.133 [2024-04-26 14:25:01.674798] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.133 [2024-04-26 14:25:01.675232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.133 [2024-04-26 14:25:01.675504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.133 [2024-04-26 14:25:01.675555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.133 [2024-04-26 14:25:01.675573] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.133 [2024-04-26 14:25:01.675853] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.133 [2024-04-26 14:25:01.676126] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.133 [2024-04-26 14:25:01.676149] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.133 [2024-04-26 14:25:01.676164] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.133 [2024-04-26 14:25:01.680180] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.133 [2024-04-26 14:25:01.689182] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.133 [2024-04-26 14:25:01.689690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.133 [2024-04-26 14:25:01.689884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.133 [2024-04-26 14:25:01.689934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.133 [2024-04-26 14:25:01.689950] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.133 [2024-04-26 14:25:01.690212] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.133 [2024-04-26 14:25:01.690477] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.133 [2024-04-26 14:25:01.690499] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.133 [2024-04-26 14:25:01.690514] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.133 [2024-04-26 14:25:01.694538] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.392 [2024-04-26 14:25:01.703730] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.392 [2024-04-26 14:25:01.704271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.392 [2024-04-26 14:25:01.704411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.392 [2024-04-26 14:25:01.704439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.392 [2024-04-26 14:25:01.704456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.392 [2024-04-26 14:25:01.704744] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.392 [2024-04-26 14:25:01.705012] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.392 [2024-04-26 14:25:01.705035] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.392 [2024-04-26 14:25:01.705050] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.392 [2024-04-26 14:25:01.709067] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.392 [2024-04-26 14:25:01.718107] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.392 [2024-04-26 14:25:01.718578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.392 [2024-04-26 14:25:01.718729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.392 [2024-04-26 14:25:01.718760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.392 [2024-04-26 14:25:01.718778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.392 [2024-04-26 14:25:01.719046] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.392 [2024-04-26 14:25:01.719312] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.392 [2024-04-26 14:25:01.719334] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.392 [2024-04-26 14:25:01.719350] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.392 [2024-04-26 14:25:01.723369] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.392 [2024-04-26 14:25:01.732569] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.392 [2024-04-26 14:25:01.733077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.392 [2024-04-26 14:25:01.733331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.392 [2024-04-26 14:25:01.733382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.392 [2024-04-26 14:25:01.733399] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.392 [2024-04-26 14:25:01.733671] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.392 [2024-04-26 14:25:01.733936] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.392 [2024-04-26 14:25:01.733959] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.392 [2024-04-26 14:25:01.733974] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.392 [2024-04-26 14:25:01.737978] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.392 [2024-04-26 14:25:01.746991] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.392 [2024-04-26 14:25:01.747485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.392 [2024-04-26 14:25:01.747734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.392 [2024-04-26 14:25:01.747765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.392 [2024-04-26 14:25:01.747783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.392 [2024-04-26 14:25:01.748050] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.392 [2024-04-26 14:25:01.748316] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.392 [2024-04-26 14:25:01.748340] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.392 [2024-04-26 14:25:01.748355] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.392 [2024-04-26 14:25:01.752380] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.392 [2024-04-26 14:25:01.761290] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.392 [2024-04-26 14:25:01.761865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.392 [2024-04-26 14:25:01.762127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.392 [2024-04-26 14:25:01.762178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.392 [2024-04-26 14:25:01.762196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.392 [2024-04-26 14:25:01.762463] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.392 [2024-04-26 14:25:01.762751] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.392 [2024-04-26 14:25:01.762775] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.392 [2024-04-26 14:25:01.762790] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.392 [2024-04-26 14:25:01.766806] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.392 [2024-04-26 14:25:01.775840] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.392 [2024-04-26 14:25:01.776371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.392 [2024-04-26 14:25:01.776521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.392 [2024-04-26 14:25:01.776551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.392 [2024-04-26 14:25:01.776569] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.392 [2024-04-26 14:25:01.776848] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.392 [2024-04-26 14:25:01.777115] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.392 [2024-04-26 14:25:01.777138] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.392 [2024-04-26 14:25:01.777153] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.392 [2024-04-26 14:25:01.781196] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.392 [2024-04-26 14:25:01.790300] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.392 [2024-04-26 14:25:01.790847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.392 [2024-04-26 14:25:01.791076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.392 [2024-04-26 14:25:01.791105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.392 [2024-04-26 14:25:01.791123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.392 [2024-04-26 14:25:01.791390] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.392 [2024-04-26 14:25:01.791668] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.392 [2024-04-26 14:25:01.791692] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.392 [2024-04-26 14:25:01.791707] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.392 [2024-04-26 14:25:01.795704] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.392 [2024-04-26 14:25:01.804758] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.392 [2024-04-26 14:25:01.805302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.392 [2024-04-26 14:25:01.805586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.392 [2024-04-26 14:25:01.805652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.392 [2024-04-26 14:25:01.805672] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.393 [2024-04-26 14:25:01.805940] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.393 [2024-04-26 14:25:01.806206] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.393 [2024-04-26 14:25:01.806229] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.393 [2024-04-26 14:25:01.806244] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.393 [2024-04-26 14:25:01.810267] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.393 [2024-04-26 14:25:01.819298] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.393 [2024-04-26 14:25:01.819773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.393 [2024-04-26 14:25:01.820011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.393 [2024-04-26 14:25:01.820059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.393 [2024-04-26 14:25:01.820077] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.393 [2024-04-26 14:25:01.820345] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.393 [2024-04-26 14:25:01.820610] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.393 [2024-04-26 14:25:01.820644] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.393 [2024-04-26 14:25:01.820662] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.393 [2024-04-26 14:25:01.824688] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.393 [2024-04-26 14:25:01.833690] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.393 [2024-04-26 14:25:01.834264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.393 [2024-04-26 14:25:01.834523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.393 [2024-04-26 14:25:01.834574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.393 [2024-04-26 14:25:01.834592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.393 [2024-04-26 14:25:01.834872] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.393 [2024-04-26 14:25:01.835138] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.393 [2024-04-26 14:25:01.835161] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.393 [2024-04-26 14:25:01.835176] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.393 [2024-04-26 14:25:01.839214] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.393 [2024-04-26 14:25:01.848027] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.393 [2024-04-26 14:25:01.848554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.393 [2024-04-26 14:25:01.848785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.393 [2024-04-26 14:25:01.848834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.393 [2024-04-26 14:25:01.848858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.393 [2024-04-26 14:25:01.849126] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.393 [2024-04-26 14:25:01.849392] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.393 [2024-04-26 14:25:01.849415] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.393 [2024-04-26 14:25:01.849431] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.393 [2024-04-26 14:25:01.853435] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.393 [2024-04-26 14:25:01.862443] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.393 [2024-04-26 14:25:01.862996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.393 [2024-04-26 14:25:01.863253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.393 [2024-04-26 14:25:01.863303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.393 [2024-04-26 14:25:01.863321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.393 [2024-04-26 14:25:01.863589] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.393 [2024-04-26 14:25:01.863866] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.393 [2024-04-26 14:25:01.863889] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.393 [2024-04-26 14:25:01.863905] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.393 [2024-04-26 14:25:01.867914] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.393 [2024-04-26 14:25:01.876953] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:20.393 [2024-04-26 14:25:01.877493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.393 [2024-04-26 14:25:01.877743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:20.393 [2024-04-26 14:25:01.877793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:20.393 [2024-04-26 14:25:01.877811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:20.393 [2024-04-26 14:25:01.878079] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:20.393 [2024-04-26 14:25:01.878346] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:20.393 [2024-04-26 14:25:01.878368] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:20.393 [2024-04-26 14:25:01.878383] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:20.393 [2024-04-26 14:25:01.882416] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:20.393 [2024-04-26 14:25:01.891425] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.393 [2024-04-26 14:25:01.891868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.393 [2024-04-26 14:25:01.892183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.393 [2024-04-26 14:25:01.892212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.393 [2024-04-26 14:25:01.892229] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.393 [2024-04-26 14:25:01.892497] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.393 [2024-04-26 14:25:01.892774] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.393 [2024-04-26 14:25:01.892797] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.393 [2024-04-26 14:25:01.892812] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.393 [2024-04-26 14:25:01.896850] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.393 [2024-04-26 14:25:01.905885] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.393 [2024-04-26 14:25:01.906318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.393 [2024-04-26 14:25:01.906584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.393 [2024-04-26 14:25:01.906640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.393 [2024-04-26 14:25:01.906660] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.393 [2024-04-26 14:25:01.906922] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.393 [2024-04-26 14:25:01.907187] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.393 [2024-04-26 14:25:01.907209] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.393 [2024-04-26 14:25:01.907224] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.393 [2024-04-26 14:25:01.911255] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.393 [2024-04-26 14:25:01.920253] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.393 [2024-04-26 14:25:01.920732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.393 [2024-04-26 14:25:01.920988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.393 [2024-04-26 14:25:01.921039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.393 [2024-04-26 14:25:01.921057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.393 [2024-04-26 14:25:01.921325] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.393 [2024-04-26 14:25:01.921591] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.393 [2024-04-26 14:25:01.921614] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.393 [2024-04-26 14:25:01.921629] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.393 [2024-04-26 14:25:01.925654] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.393 [2024-04-26 14:25:01.934679] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.393 [2024-04-26 14:25:01.935155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.393 [2024-04-26 14:25:01.935364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.393 [2024-04-26 14:25:01.935413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.393 [2024-04-26 14:25:01.935431] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.393 [2024-04-26 14:25:01.935715] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.393 [2024-04-26 14:25:01.935988] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.393 [2024-04-26 14:25:01.936011] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.393 [2024-04-26 14:25:01.936026] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.393 [2024-04-26 14:25:01.940057] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.394 [2024-04-26 14:25:01.949074] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.394 [2024-04-26 14:25:01.949663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.394 [2024-04-26 14:25:01.949942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.394 [2024-04-26 14:25:01.949972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.394 [2024-04-26 14:25:01.949989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.394 [2024-04-26 14:25:01.950257] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.394 [2024-04-26 14:25:01.950524] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.394 [2024-04-26 14:25:01.950546] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.394 [2024-04-26 14:25:01.950561] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.394 [2024-04-26 14:25:01.954580] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.654 [2024-04-26 14:25:01.963508] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.654 [2024-04-26 14:25:01.964017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.654 [2024-04-26 14:25:01.964265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.654 [2024-04-26 14:25:01.964318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.654 [2024-04-26 14:25:01.964335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.654 [2024-04-26 14:25:01.964599] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.654 [2024-04-26 14:25:01.964891] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.654 [2024-04-26 14:25:01.964918] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.654 [2024-04-26 14:25:01.964933] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.654 [2024-04-26 14:25:01.968979] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.654 [2024-04-26 14:25:01.978033] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.654 [2024-04-26 14:25:01.978481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.654 [2024-04-26 14:25:01.978680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.654 [2024-04-26 14:25:01.978737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.654 [2024-04-26 14:25:01.978755] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.654 [2024-04-26 14:25:01.979017] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.654 [2024-04-26 14:25:01.979288] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.654 [2024-04-26 14:25:01.979315] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.654 [2024-04-26 14:25:01.979331] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.654 [2024-04-26 14:25:01.983507] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.654 [2024-04-26 14:25:01.992533] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.654 [2024-04-26 14:25:01.993066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.654 [2024-04-26 14:25:01.993205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.654 [2024-04-26 14:25:01.993234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.654 [2024-04-26 14:25:01.993251] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.654 [2024-04-26 14:25:01.993515] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.654 [2024-04-26 14:25:01.993788] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.654 [2024-04-26 14:25:01.993811] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.654 [2024-04-26 14:25:01.993826] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.654 [2024-04-26 14:25:01.997829] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.654 [2024-04-26 14:25:02.007029] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.654 [2024-04-26 14:25:02.007502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.654 [2024-04-26 14:25:02.007667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.654 [2024-04-26 14:25:02.007712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.654 [2024-04-26 14:25:02.007743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.654 [2024-04-26 14:25:02.008005] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.654 [2024-04-26 14:25:02.008269] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.654 [2024-04-26 14:25:02.008291] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.654 [2024-04-26 14:25:02.008306] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.654 [2024-04-26 14:25:02.012338] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.654 [2024-04-26 14:25:02.021350] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.654 [2024-04-26 14:25:02.021799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.654 [2024-04-26 14:25:02.022005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.654 [2024-04-26 14:25:02.022054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.654 [2024-04-26 14:25:02.022071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.654 [2024-04-26 14:25:02.022339] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.654 [2024-04-26 14:25:02.022602] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.654 [2024-04-26 14:25:02.022624] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.654 [2024-04-26 14:25:02.022657] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.654 [2024-04-26 14:25:02.026660] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.654 [2024-04-26 14:25:02.035880] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.654 [2024-04-26 14:25:02.036284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.654 [2024-04-26 14:25:02.036552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.654 [2024-04-26 14:25:02.036580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.654 [2024-04-26 14:25:02.036598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.654 [2024-04-26 14:25:02.036878] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.654 [2024-04-26 14:25:02.037146] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.654 [2024-04-26 14:25:02.037168] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.654 [2024-04-26 14:25:02.037183] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.654 [2024-04-26 14:25:02.041190] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.654 [2024-04-26 14:25:02.050191] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.654 [2024-04-26 14:25:02.050648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.655 [2024-04-26 14:25:02.050798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.655 [2024-04-26 14:25:02.050828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.655 [2024-04-26 14:25:02.050846] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.655 [2024-04-26 14:25:02.051126] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.655 [2024-04-26 14:25:02.051391] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.655 [2024-04-26 14:25:02.051413] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.655 [2024-04-26 14:25:02.051428] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.655 [2024-04-26 14:25:02.055445] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.655 [2024-04-26 14:25:02.064660] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.655 [2024-04-26 14:25:02.065127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.655 [2024-04-26 14:25:02.065283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.655 [2024-04-26 14:25:02.065311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.655 [2024-04-26 14:25:02.065328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.655 [2024-04-26 14:25:02.065595] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.655 [2024-04-26 14:25:02.065872] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.655 [2024-04-26 14:25:02.065895] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.655 [2024-04-26 14:25:02.065910] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.655 [2024-04-26 14:25:02.069915] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.655 [2024-04-26 14:25:02.079131] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.655 [2024-04-26 14:25:02.079702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.655 [2024-04-26 14:25:02.079912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.655 [2024-04-26 14:25:02.079968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.655 [2024-04-26 14:25:02.079986] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.655 [2024-04-26 14:25:02.080254] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.655 [2024-04-26 14:25:02.080519] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.655 [2024-04-26 14:25:02.080541] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.655 [2024-04-26 14:25:02.080556] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.655 [2024-04-26 14:25:02.084559] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.655 [2024-04-26 14:25:02.093557] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.655 [2024-04-26 14:25:02.094084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.655 [2024-04-26 14:25:02.094242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.655 [2024-04-26 14:25:02.094270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.655 [2024-04-26 14:25:02.094288] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.655 [2024-04-26 14:25:02.094549] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.655 [2024-04-26 14:25:02.094822] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.655 [2024-04-26 14:25:02.094844] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.655 [2024-04-26 14:25:02.094859] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.655 [2024-04-26 14:25:02.098855] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.655 [2024-04-26 14:25:02.108078] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.655 [2024-04-26 14:25:02.108533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.655 [2024-04-26 14:25:02.108697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.655 [2024-04-26 14:25:02.108725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.655 [2024-04-26 14:25:02.108742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.655 [2024-04-26 14:25:02.109003] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.655 [2024-04-26 14:25:02.109268] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.655 [2024-04-26 14:25:02.109289] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.655 [2024-04-26 14:25:02.109303] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.655 [2024-04-26 14:25:02.113303] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.655 [2024-04-26 14:25:02.122517] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.655 [2024-04-26 14:25:02.122946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.655 [2024-04-26 14:25:02.123082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.655 [2024-04-26 14:25:02.123108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.655 [2024-04-26 14:25:02.123125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.655 [2024-04-26 14:25:02.123386] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.655 [2024-04-26 14:25:02.123659] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.655 [2024-04-26 14:25:02.123681] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.655 [2024-04-26 14:25:02.123696] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.655 [2024-04-26 14:25:02.127695] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.655 [2024-04-26 14:25:02.136892] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.655 [2024-04-26 14:25:02.137560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.655 [2024-04-26 14:25:02.137731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.655 [2024-04-26 14:25:02.137761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.655 [2024-04-26 14:25:02.137779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.655 [2024-04-26 14:25:02.138047] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.655 [2024-04-26 14:25:02.138313] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.655 [2024-04-26 14:25:02.138335] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.655 [2024-04-26 14:25:02.138350] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.655 [2024-04-26 14:25:02.142355] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
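The innermost failure in every cycle above is posix_sock_create reporting errno = 111, which on Linux is ECONNREFUSED: the TCP connect() to 10.0.0.2:4420 (the NVMe/TCP well-known port) is refused because nothing is listening while the target is down. A minimal standalone sketch of that failure mode follows — assuming a reachable host with no listener on the port; the address and port are taken from the log and used here for illustration only.

/* Illustrative sketch, not part of the log: reproduce "connect() failed,
 * errno = 111" by connecting to a reachable address with no listener. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),            /* NVMe/TCP well-known port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    /* With no listener on the target, connect() fails and errno is set to
     * ECONNREFUSED (111 on Linux), matching the posix.c lines above. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}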
00:20:20.655 [2024-04-26 14:25:02.151337] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.655 [2024-04-26 14:25:02.151759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.655 [2024-04-26 14:25:02.151970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.655 [2024-04-26 14:25:02.152024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.655 [2024-04-26 14:25:02.152041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.655 [2024-04-26 14:25:02.152306] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.655 [2024-04-26 14:25:02.152570] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.655 [2024-04-26 14:25:02.152591] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.655 [2024-04-26 14:25:02.152606] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.655 [2024-04-26 14:25:02.156645] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.655 [2024-04-26 14:25:02.165682] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.655 [2024-04-26 14:25:02.166214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.655 [2024-04-26 14:25:02.166423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.655 [2024-04-26 14:25:02.166478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.655 [2024-04-26 14:25:02.166495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.655 [2024-04-26 14:25:02.166769] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.655 [2024-04-26 14:25:02.167039] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.655 [2024-04-26 14:25:02.167061] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.655 [2024-04-26 14:25:02.167076] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.655 [2024-04-26 14:25:02.171085] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.655 [2024-04-26 14:25:02.180123] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.655 [2024-04-26 14:25:02.180613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.655 [2024-04-26 14:25:02.180814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.655 [2024-04-26 14:25:02.180842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.655 [2024-04-26 14:25:02.180859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.656 [2024-04-26 14:25:02.181120] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.656 [2024-04-26 14:25:02.181384] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.656 [2024-04-26 14:25:02.181406] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.656 [2024-04-26 14:25:02.181420] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.656 [2024-04-26 14:25:02.185427] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.656 [2024-04-26 14:25:02.194655] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.656 [2024-04-26 14:25:02.195096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.656 [2024-04-26 14:25:02.195266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.656 [2024-04-26 14:25:02.195296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.656 [2024-04-26 14:25:02.195314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.656 [2024-04-26 14:25:02.195582] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.656 [2024-04-26 14:25:02.195859] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.656 [2024-04-26 14:25:02.195883] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.656 [2024-04-26 14:25:02.195899] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.656 [2024-04-26 14:25:02.199903] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.656 [2024-04-26 14:25:02.209116] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.656 [2024-04-26 14:25:02.209620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.656 [2024-04-26 14:25:02.209796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.656 [2024-04-26 14:25:02.209827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.656 [2024-04-26 14:25:02.209846] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.656 [2024-04-26 14:25:02.210114] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.656 [2024-04-26 14:25:02.210378] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.656 [2024-04-26 14:25:02.210400] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.656 [2024-04-26 14:25:02.210416] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.656 [2024-04-26 14:25:02.214425] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.916 [2024-04-26 14:25:02.223650] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.916 [2024-04-26 14:25:02.224167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.916 [2024-04-26 14:25:02.224427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.916 [2024-04-26 14:25:02.224476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.916 [2024-04-26 14:25:02.224494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.916 [2024-04-26 14:25:02.224772] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.916 [2024-04-26 14:25:02.225038] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.916 [2024-04-26 14:25:02.225060] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.916 [2024-04-26 14:25:02.225076] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.916 [2024-04-26 14:25:02.229181] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.916 [2024-04-26 14:25:02.238023] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.916 [2024-04-26 14:25:02.238543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.916 [2024-04-26 14:25:02.238843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.916 [2024-04-26 14:25:02.238892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.916 [2024-04-26 14:25:02.238909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.916 [2024-04-26 14:25:02.239172] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.916 [2024-04-26 14:25:02.239436] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.916 [2024-04-26 14:25:02.239457] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.916 [2024-04-26 14:25:02.239472] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.916 [2024-04-26 14:25:02.243528] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.916 [2024-04-26 14:25:02.252364] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.916 [2024-04-26 14:25:02.252901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.916 [2024-04-26 14:25:02.253113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.916 [2024-04-26 14:25:02.253141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.916 [2024-04-26 14:25:02.253168] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.916 [2024-04-26 14:25:02.253455] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.916 [2024-04-26 14:25:02.253737] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.916 [2024-04-26 14:25:02.253760] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.916 [2024-04-26 14:25:02.253775] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.916 [2024-04-26 14:25:02.257850] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.916 [2024-04-26 14:25:02.266920] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.916 [2024-04-26 14:25:02.267392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.916 [2024-04-26 14:25:02.267531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.916 [2024-04-26 14:25:02.267559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.916 [2024-04-26 14:25:02.267576] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.916 [2024-04-26 14:25:02.267851] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.916 [2024-04-26 14:25:02.268116] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.916 [2024-04-26 14:25:02.268138] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.916 [2024-04-26 14:25:02.268153] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.916 [2024-04-26 14:25:02.272203] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.916 [2024-04-26 14:25:02.281208] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.916 [2024-04-26 14:25:02.281789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.916 [2024-04-26 14:25:02.281969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.916 [2024-04-26 14:25:02.281998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.916 [2024-04-26 14:25:02.282016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.916 [2024-04-26 14:25:02.282283] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.916 [2024-04-26 14:25:02.282548] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.916 [2024-04-26 14:25:02.282570] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.916 [2024-04-26 14:25:02.282585] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.916 [2024-04-26 14:25:02.286598] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.916 [2024-04-26 14:25:02.295619] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.916 [2024-04-26 14:25:02.296097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.916 [2024-04-26 14:25:02.296268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.916 [2024-04-26 14:25:02.296297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.916 [2024-04-26 14:25:02.296314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.916 [2024-04-26 14:25:02.296587] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.916 [2024-04-26 14:25:02.296867] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.917 [2024-04-26 14:25:02.296891] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.917 [2024-04-26 14:25:02.296912] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.917 [2024-04-26 14:25:02.300965] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.917 [2024-04-26 14:25:02.310010] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.917 [2024-04-26 14:25:02.310523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.917 [2024-04-26 14:25:02.310823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.917 [2024-04-26 14:25:02.310851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.917 [2024-04-26 14:25:02.310868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.917 [2024-04-26 14:25:02.311129] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.917 [2024-04-26 14:25:02.311393] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.917 [2024-04-26 14:25:02.311415] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.917 [2024-04-26 14:25:02.311429] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.917 [2024-04-26 14:25:02.315473] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.917 [2024-04-26 14:25:02.324615] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.917 [2024-04-26 14:25:02.325128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.917 [2024-04-26 14:25:02.325372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.917 [2024-04-26 14:25:02.325421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.917 [2024-04-26 14:25:02.325439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.917 [2024-04-26 14:25:02.325720] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.917 [2024-04-26 14:25:02.325986] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.917 [2024-04-26 14:25:02.326008] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.917 [2024-04-26 14:25:02.326023] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.917 [2024-04-26 14:25:02.330058] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.917 [2024-04-26 14:25:02.339097] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.917 [2024-04-26 14:25:02.339628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.917 [2024-04-26 14:25:02.339793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.917 [2024-04-26 14:25:02.339822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.917 [2024-04-26 14:25:02.339839] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.917 [2024-04-26 14:25:02.340107] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.917 [2024-04-26 14:25:02.340378] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.917 [2024-04-26 14:25:02.340400] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.917 [2024-04-26 14:25:02.340415] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.917 [2024-04-26 14:25:02.344463] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.917 [2024-04-26 14:25:02.353552] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.917 [2024-04-26 14:25:02.354164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.917 [2024-04-26 14:25:02.354363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.917 [2024-04-26 14:25:02.354397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.917 [2024-04-26 14:25:02.354427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.917 [2024-04-26 14:25:02.354709] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.917 [2024-04-26 14:25:02.354980] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.917 [2024-04-26 14:25:02.355002] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.917 [2024-04-26 14:25:02.355017] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.917 [2024-04-26 14:25:02.359069] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.917 [2024-04-26 14:25:02.367895] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.917 [2024-04-26 14:25:02.368421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.917 [2024-04-26 14:25:02.368586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.917 [2024-04-26 14:25:02.368614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.917 [2024-04-26 14:25:02.368643] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.917 [2024-04-26 14:25:02.368914] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.917 [2024-04-26 14:25:02.369185] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.917 [2024-04-26 14:25:02.369207] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.917 [2024-04-26 14:25:02.369222] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.917 [2024-04-26 14:25:02.373264] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.917 [2024-04-26 14:25:02.382359] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.917 [2024-04-26 14:25:02.382847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.917 [2024-04-26 14:25:02.382982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.917 [2024-04-26 14:25:02.383008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.917 [2024-04-26 14:25:02.383025] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.917 [2024-04-26 14:25:02.383286] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.917 [2024-04-26 14:25:02.383549] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.917 [2024-04-26 14:25:02.383576] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.917 [2024-04-26 14:25:02.383591] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.917 [2024-04-26 14:25:02.387655] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.917 [2024-04-26 14:25:02.396713] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.917 [2024-04-26 14:25:02.397209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.917 [2024-04-26 14:25:02.397406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.917 [2024-04-26 14:25:02.397453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.917 [2024-04-26 14:25:02.397470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.917 [2024-04-26 14:25:02.397742] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.917 [2024-04-26 14:25:02.398006] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.917 [2024-04-26 14:25:02.398028] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.917 [2024-04-26 14:25:02.398043] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.917 [2024-04-26 14:25:02.402078] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.917 [2024-04-26 14:25:02.411127] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.917 [2024-04-26 14:25:02.411606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.917 [2024-04-26 14:25:02.411833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.917 [2024-04-26 14:25:02.411883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.917 [2024-04-26 14:25:02.411899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.917 [2024-04-26 14:25:02.412161] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.917 [2024-04-26 14:25:02.412424] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.917 [2024-04-26 14:25:02.412445] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.917 [2024-04-26 14:25:02.412460] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.917 [2024-04-26 14:25:02.416508] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.917 [2024-04-26 14:25:02.425591] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.917 [2024-04-26 14:25:02.426042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.917 [2024-04-26 14:25:02.426150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.917 [2024-04-26 14:25:02.426179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.917 [2024-04-26 14:25:02.426196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.917 [2024-04-26 14:25:02.426457] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.917 [2024-04-26 14:25:02.426732] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.917 [2024-04-26 14:25:02.426754] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.917 [2024-04-26 14:25:02.426775] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.917 [2024-04-26 14:25:02.430835] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.917 [2024-04-26 14:25:02.440122] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.918 [2024-04-26 14:25:02.440582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.918 [2024-04-26 14:25:02.440823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.918 [2024-04-26 14:25:02.440850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.918 [2024-04-26 14:25:02.440867] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.918 [2024-04-26 14:25:02.441128] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.918 [2024-04-26 14:25:02.441391] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.918 [2024-04-26 14:25:02.441413] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.918 [2024-04-26 14:25:02.441428] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.918 [2024-04-26 14:25:02.445480] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.918 [2024-04-26 14:25:02.454512] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.918 [2024-04-26 14:25:02.455068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.918 [2024-04-26 14:25:02.455266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.918 [2024-04-26 14:25:02.455326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.918 [2024-04-26 14:25:02.455343] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.918 [2024-04-26 14:25:02.455611] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.918 [2024-04-26 14:25:02.455898] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.918 [2024-04-26 14:25:02.455921] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.918 [2024-04-26 14:25:02.455936] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.918 [2024-04-26 14:25:02.459977] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:20.918 [2024-04-26 14:25:02.469032] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:20.918 [2024-04-26 14:25:02.469464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.918 [2024-04-26 14:25:02.469625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:20.918 [2024-04-26 14:25:02.469661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:20.918 [2024-04-26 14:25:02.469678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:20.918 [2024-04-26 14:25:02.469940] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:20.918 [2024-04-26 14:25:02.470203] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:20.918 [2024-04-26 14:25:02.470224] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:20.918 [2024-04-26 14:25:02.470239] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:20.918 [2024-04-26 14:25:02.474273] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:20.918 [2024-04-26 14:25:02.483705] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.178 [2024-04-26 14:25:02.484235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.484483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.484531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.178 [2024-04-26 14:25:02.484549] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.178 [2024-04-26 14:25:02.484836] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.178 [2024-04-26 14:25:02.485103] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.178 [2024-04-26 14:25:02.485124] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.178 [2024-04-26 14:25:02.485139] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.178 [2024-04-26 14:25:02.489260] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.178 [2024-04-26 14:25:02.498182] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.178 [2024-04-26 14:25:02.498595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.498795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.498843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.178 [2024-04-26 14:25:02.498861] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.178 [2024-04-26 14:25:02.499123] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.178 [2024-04-26 14:25:02.499387] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.178 [2024-04-26 14:25:02.499409] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.178 [2024-04-26 14:25:02.499424] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.178 [2024-04-26 14:25:02.503439] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.178 [2024-04-26 14:25:02.512750] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.178 [2024-04-26 14:25:02.513295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.513472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.513498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.178 [2024-04-26 14:25:02.513515] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.178 [2024-04-26 14:25:02.513790] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.178 [2024-04-26 14:25:02.514061] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.178 [2024-04-26 14:25:02.514083] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.178 [2024-04-26 14:25:02.514098] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.178 [2024-04-26 14:25:02.518140] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.178 [2024-04-26 14:25:02.527188] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.178 [2024-04-26 14:25:02.527663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.527826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.527853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.178 [2024-04-26 14:25:02.527870] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.178 [2024-04-26 14:25:02.528132] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.178 [2024-04-26 14:25:02.528396] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.178 [2024-04-26 14:25:02.528417] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.178 [2024-04-26 14:25:02.528432] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.178 [2024-04-26 14:25:02.532470] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.178 [2024-04-26 14:25:02.541543] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.178 [2024-04-26 14:25:02.541966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.542175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.542226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.178 [2024-04-26 14:25:02.542243] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.178 [2024-04-26 14:25:02.542511] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.178 [2024-04-26 14:25:02.542790] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.178 [2024-04-26 14:25:02.542813] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.178 [2024-04-26 14:25:02.542828] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.178 [2024-04-26 14:25:02.546885] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.178 [2024-04-26 14:25:02.555987] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.178 [2024-04-26 14:25:02.556506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.556666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.556693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.178 [2024-04-26 14:25:02.556711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.178 [2024-04-26 14:25:02.556972] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.178 [2024-04-26 14:25:02.557236] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.178 [2024-04-26 14:25:02.557257] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.178 [2024-04-26 14:25:02.557272] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.178 [2024-04-26 14:25:02.561282] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.178 [2024-04-26 14:25:02.570362] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.178 [2024-04-26 14:25:02.570842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.570991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.571019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.178 [2024-04-26 14:25:02.571037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.178 [2024-04-26 14:25:02.571304] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.178 [2024-04-26 14:25:02.571570] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.178 [2024-04-26 14:25:02.571591] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.178 [2024-04-26 14:25:02.571606] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.178 [2024-04-26 14:25:02.575672] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.178 [2024-04-26 14:25:02.584764] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.178 [2024-04-26 14:25:02.585173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.585358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.585384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.178 [2024-04-26 14:25:02.585401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.178 [2024-04-26 14:25:02.585674] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.178 [2024-04-26 14:25:02.585939] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.178 [2024-04-26 14:25:02.585960] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.178 [2024-04-26 14:25:02.585974] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.178 [2024-04-26 14:25:02.590007] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.178 [2024-04-26 14:25:02.599279] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.178 [2024-04-26 14:25:02.599788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.599982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.600042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.178 [2024-04-26 14:25:02.600060] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.178 [2024-04-26 14:25:02.600327] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.178 [2024-04-26 14:25:02.600593] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.178 [2024-04-26 14:25:02.600614] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.178 [2024-04-26 14:25:02.600639] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.178 [2024-04-26 14:25:02.604693] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.178 [2024-04-26 14:25:02.613780] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.178 [2024-04-26 14:25:02.614374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.614537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.614566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.178 [2024-04-26 14:25:02.614584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.178 [2024-04-26 14:25:02.614863] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.178 [2024-04-26 14:25:02.615130] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.178 [2024-04-26 14:25:02.615152] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.178 [2024-04-26 14:25:02.615167] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.178 [2024-04-26 14:25:02.619183] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.178 [2024-04-26 14:25:02.628248] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.178 [2024-04-26 14:25:02.628854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.629061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.629110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.178 [2024-04-26 14:25:02.629128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.178 [2024-04-26 14:25:02.629397] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.178 [2024-04-26 14:25:02.629673] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.178 [2024-04-26 14:25:02.629696] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.178 [2024-04-26 14:25:02.629711] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.178 [2024-04-26 14:25:02.633754] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.178 [2024-04-26 14:25:02.642626] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.178 [2024-04-26 14:25:02.643176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.643499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.643527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.178 [2024-04-26 14:25:02.643544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.178 [2024-04-26 14:25:02.643826] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.178 [2024-04-26 14:25:02.644092] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.178 [2024-04-26 14:25:02.644113] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.178 [2024-04-26 14:25:02.644128] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.178 [2024-04-26 14:25:02.648171] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.178 [2024-04-26 14:25:02.656959] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.178 [2024-04-26 14:25:02.657473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.657646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.178 [2024-04-26 14:25:02.657673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.179 [2024-04-26 14:25:02.657695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.179 [2024-04-26 14:25:02.657957] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.179 [2024-04-26 14:25:02.658221] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.179 [2024-04-26 14:25:02.658242] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.179 [2024-04-26 14:25:02.658257] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.179 [2024-04-26 14:25:02.662290] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.179 [2024-04-26 14:25:02.671357] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.179 [2024-04-26 14:25:02.671949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.179 [2024-04-26 14:25:02.672171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.179 [2024-04-26 14:25:02.672219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.179 [2024-04-26 14:25:02.672237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.179 [2024-04-26 14:25:02.672504] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.179 [2024-04-26 14:25:02.672783] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.179 [2024-04-26 14:25:02.672806] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.179 [2024-04-26 14:25:02.672821] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.179 [2024-04-26 14:25:02.676892] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.179 [2024-04-26 14:25:02.685707] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.179 [2024-04-26 14:25:02.686178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.179 [2024-04-26 14:25:02.686344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.179 [2024-04-26 14:25:02.686371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.179 [2024-04-26 14:25:02.686388] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.179 [2024-04-26 14:25:02.686660] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.179 [2024-04-26 14:25:02.686926] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.179 [2024-04-26 14:25:02.686947] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.179 [2024-04-26 14:25:02.686962] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.179 [2024-04-26 14:25:02.690998] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.179 [2024-04-26 14:25:02.700056] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.179 [2024-04-26 14:25:02.700531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.179 [2024-04-26 14:25:02.700697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.179 [2024-04-26 14:25:02.700724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.179 [2024-04-26 14:25:02.700741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.179 [2024-04-26 14:25:02.701010] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.179 [2024-04-26 14:25:02.701274] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.179 [2024-04-26 14:25:02.701295] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.179 [2024-04-26 14:25:02.701310] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.179 [2024-04-26 14:25:02.705348] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.179 [2024-04-26 14:25:02.714490] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.179 [2024-04-26 14:25:02.715017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.179 [2024-04-26 14:25:02.715176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.179 [2024-04-26 14:25:02.715206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.179 [2024-04-26 14:25:02.715224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.179 [2024-04-26 14:25:02.715491] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.179 [2024-04-26 14:25:02.715777] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.179 [2024-04-26 14:25:02.715801] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.179 [2024-04-26 14:25:02.715816] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.179 [2024-04-26 14:25:02.719861] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.179 [2024-04-26 14:25:02.728906] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.179 [2024-04-26 14:25:02.729428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.179 [2024-04-26 14:25:02.729587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.179 [2024-04-26 14:25:02.729617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.179 [2024-04-26 14:25:02.729646] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.179 [2024-04-26 14:25:02.729916] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.179 [2024-04-26 14:25:02.730185] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.179 [2024-04-26 14:25:02.730206] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.179 [2024-04-26 14:25:02.730221] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.179 [2024-04-26 14:25:02.734261] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.179 [2024-04-26 14:25:02.743430] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.179 [2024-04-26 14:25:02.744013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.179 [2024-04-26 14:25:02.744207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.179 [2024-04-26 14:25:02.744238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.179 [2024-04-26 14:25:02.744257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.179 [2024-04-26 14:25:02.744528] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.179 [2024-04-26 14:25:02.744823] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.179 [2024-04-26 14:25:02.744848] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.179 [2024-04-26 14:25:02.744863] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.439 [2024-04-26 14:25:02.748986] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.439 [2024-04-26 14:25:02.757889] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.439 [2024-04-26 14:25:02.758319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.439 [2024-04-26 14:25:02.758549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.439 [2024-04-26 14:25:02.758600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.439 [2024-04-26 14:25:02.758618] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.439 [2024-04-26 14:25:02.758897] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.439 [2024-04-26 14:25:02.759163] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.439 [2024-04-26 14:25:02.759184] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.439 [2024-04-26 14:25:02.759199] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.439 [2024-04-26 14:25:02.763236] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.439 [2024-04-26 14:25:02.772342] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.439 [2024-04-26 14:25:02.772830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.439 [2024-04-26 14:25:02.773043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.439 [2024-04-26 14:25:02.773103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.439 [2024-04-26 14:25:02.773120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.439 [2024-04-26 14:25:02.773382] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.439 [2024-04-26 14:25:02.773658] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.439 [2024-04-26 14:25:02.773680] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.439 [2024-04-26 14:25:02.773695] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.439 [2024-04-26 14:25:02.777741] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.439 [2024-04-26 14:25:02.786981] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.439 [2024-04-26 14:25:02.787448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.439 [2024-04-26 14:25:02.787665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.439 [2024-04-26 14:25:02.787717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.439 [2024-04-26 14:25:02.787734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.439 [2024-04-26 14:25:02.787996] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.439 [2024-04-26 14:25:02.788260] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.439 [2024-04-26 14:25:02.788303] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.439 [2024-04-26 14:25:02.788318] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.439 [2024-04-26 14:25:02.792351] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.439 [2024-04-26 14:25:02.801397] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.439 [2024-04-26 14:25:02.801874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.439 [2024-04-26 14:25:02.802108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.439 [2024-04-26 14:25:02.802155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.439 [2024-04-26 14:25:02.802173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.439 [2024-04-26 14:25:02.802441] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.439 [2024-04-26 14:25:02.802727] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.439 [2024-04-26 14:25:02.802750] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.439 [2024-04-26 14:25:02.802765] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.439 [2024-04-26 14:25:02.806809] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.439 [2024-04-26 14:25:02.815980] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.439 [2024-04-26 14:25:02.816496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.439 [2024-04-26 14:25:02.816666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.439 [2024-04-26 14:25:02.816696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.439 [2024-04-26 14:25:02.816714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.439 [2024-04-26 14:25:02.816982] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.439 [2024-04-26 14:25:02.817247] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.439 [2024-04-26 14:25:02.817269] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.439 [2024-04-26 14:25:02.817284] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.439 [2024-04-26 14:25:02.821319] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.439 [2024-04-26 14:25:02.830404] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.439 [2024-04-26 14:25:02.830953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.439 [2024-04-26 14:25:02.831159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.439 [2024-04-26 14:25:02.831213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.439 [2024-04-26 14:25:02.831231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.439 [2024-04-26 14:25:02.831498] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.439 [2024-04-26 14:25:02.831778] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.439 [2024-04-26 14:25:02.831800] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.439 [2024-04-26 14:25:02.831824] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.439 [2024-04-26 14:25:02.835875] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.439 [2024-04-26 14:25:02.844997] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.439 [2024-04-26 14:25:02.845469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.439 [2024-04-26 14:25:02.845653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.439 [2024-04-26 14:25:02.845680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.439 [2024-04-26 14:25:02.845698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.439 [2024-04-26 14:25:02.845960] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.439 [2024-04-26 14:25:02.846224] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.439 [2024-04-26 14:25:02.846245] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.440 [2024-04-26 14:25:02.846260] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.440 [2024-04-26 14:25:02.850273] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.440 [2024-04-26 14:25:02.859574] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.440 [2024-04-26 14:25:02.860035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.440 [2024-04-26 14:25:02.860176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.440 [2024-04-26 14:25:02.860202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.440 [2024-04-26 14:25:02.860219] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.440 [2024-04-26 14:25:02.860492] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.440 [2024-04-26 14:25:02.860767] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.440 [2024-04-26 14:25:02.860790] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.440 [2024-04-26 14:25:02.860805] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.440 [2024-04-26 14:25:02.864849] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.440 [2024-04-26 14:25:02.873952] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.440 [2024-04-26 14:25:02.874466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.440 [2024-04-26 14:25:02.874623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.440 [2024-04-26 14:25:02.874658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.440 [2024-04-26 14:25:02.874676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.440 [2024-04-26 14:25:02.874937] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.440 [2024-04-26 14:25:02.875201] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.440 [2024-04-26 14:25:02.875222] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.440 [2024-04-26 14:25:02.875236] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.440 [2024-04-26 14:25:02.879251] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.440 [2024-04-26 14:25:02.888287] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.440 [2024-04-26 14:25:02.888828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.440 [2024-04-26 14:25:02.889055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.440 [2024-04-26 14:25:02.889105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.440 [2024-04-26 14:25:02.889123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.440 [2024-04-26 14:25:02.889390] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.440 [2024-04-26 14:25:02.889674] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.440 [2024-04-26 14:25:02.889697] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.440 [2024-04-26 14:25:02.889711] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.440 [2024-04-26 14:25:02.893748] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.440 [2024-04-26 14:25:02.902840] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.440 [2024-04-26 14:25:02.903359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.440 [2024-04-26 14:25:02.903569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.440 [2024-04-26 14:25:02.903619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.440 [2024-04-26 14:25:02.903649] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.440 [2024-04-26 14:25:02.903919] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.440 [2024-04-26 14:25:02.904184] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.440 [2024-04-26 14:25:02.904206] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.440 [2024-04-26 14:25:02.904221] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.440 [2024-04-26 14:25:02.908248] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.440 [2024-04-26 14:25:02.917280] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.440 [2024-04-26 14:25:02.917817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.440 [2024-04-26 14:25:02.917992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.440 [2024-04-26 14:25:02.918020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.440 [2024-04-26 14:25:02.918038] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.440 [2024-04-26 14:25:02.918306] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.440 [2024-04-26 14:25:02.918570] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.440 [2024-04-26 14:25:02.918591] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.440 [2024-04-26 14:25:02.918606] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.440 [2024-04-26 14:25:02.922694] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.440 [2024-04-26 14:25:02.931775] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.440 [2024-04-26 14:25:02.932278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.440 [2024-04-26 14:25:02.932460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.440 [2024-04-26 14:25:02.932518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.440 [2024-04-26 14:25:02.932535] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.440 [2024-04-26 14:25:02.932808] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.440 [2024-04-26 14:25:02.933073] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.440 [2024-04-26 14:25:02.933094] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.440 [2024-04-26 14:25:02.933109] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.440 [2024-04-26 14:25:02.937161] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.440 [2024-04-26 14:25:02.946186] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.440 [2024-04-26 14:25:02.946686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.440 [2024-04-26 14:25:02.946888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.440 [2024-04-26 14:25:02.946937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.440 [2024-04-26 14:25:02.946954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.440 [2024-04-26 14:25:02.947215] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.440 [2024-04-26 14:25:02.947478] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.440 [2024-04-26 14:25:02.947499] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.440 [2024-04-26 14:25:02.947514] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.440 [2024-04-26 14:25:02.951573] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.440 [2024-04-26 14:25:02.960639] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.440 [2024-04-26 14:25:02.961173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.440 [2024-04-26 14:25:02.961308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.440 [2024-04-26 14:25:02.961334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.440 [2024-04-26 14:25:02.961350] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.440 [2024-04-26 14:25:02.961611] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.440 [2024-04-26 14:25:02.961885] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.440 [2024-04-26 14:25:02.961907] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.440 [2024-04-26 14:25:02.961922] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.440 [2024-04-26 14:25:02.965972] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.440 [2024-04-26 14:25:02.975031] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.440 [2024-04-26 14:25:02.975536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.440 [2024-04-26 14:25:02.975714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.441 [2024-04-26 14:25:02.975744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.441 [2024-04-26 14:25:02.975761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.441 [2024-04-26 14:25:02.976029] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.441 [2024-04-26 14:25:02.976294] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.441 [2024-04-26 14:25:02.976315] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.441 [2024-04-26 14:25:02.976330] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.441 [2024-04-26 14:25:02.980376] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.441 [2024-04-26 14:25:02.989458] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.441 [2024-04-26 14:25:02.989964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.441 [2024-04-26 14:25:02.990174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.441 [2024-04-26 14:25:02.990228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.441 [2024-04-26 14:25:02.990246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.441 [2024-04-26 14:25:02.990513] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.441 [2024-04-26 14:25:02.990790] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.441 [2024-04-26 14:25:02.990813] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.441 [2024-04-26 14:25:02.990828] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.441 [2024-04-26 14:25:02.994900] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.441 [2024-04-26 14:25:03.004077] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.441 [2024-04-26 14:25:03.004584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.441 [2024-04-26 14:25:03.004756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.441 [2024-04-26 14:25:03.004786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.441 [2024-04-26 14:25:03.004805] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.441 [2024-04-26 14:25:03.005083] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.441 [2024-04-26 14:25:03.005350] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.441 [2024-04-26 14:25:03.005372] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.441 [2024-04-26 14:25:03.005387] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.701 [2024-04-26 14:25:03.009493] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.701 [2024-04-26 14:25:03.018657] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.701 [2024-04-26 14:25:03.019235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.701 [2024-04-26 14:25:03.019395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.701 [2024-04-26 14:25:03.019424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.701 [2024-04-26 14:25:03.019443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.701 [2024-04-26 14:25:03.019727] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.701 [2024-04-26 14:25:03.019994] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.701 [2024-04-26 14:25:03.020015] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.701 [2024-04-26 14:25:03.020030] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.701 [2024-04-26 14:25:03.024137] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.701 [2024-04-26 14:25:03.033233] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.701 [2024-04-26 14:25:03.033715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.701 [2024-04-26 14:25:03.033933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.701 [2024-04-26 14:25:03.033983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.701 [2024-04-26 14:25:03.034001] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.701 [2024-04-26 14:25:03.034268] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.701 [2024-04-26 14:25:03.034533] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.701 [2024-04-26 14:25:03.034555] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.701 [2024-04-26 14:25:03.034570] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.701 [2024-04-26 14:25:03.038609] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.701 [2024-04-26 14:25:03.047674] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.701 [2024-04-26 14:25:03.048180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.701 [2024-04-26 14:25:03.048338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.701 [2024-04-26 14:25:03.048368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.701 [2024-04-26 14:25:03.048386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.701 [2024-04-26 14:25:03.048666] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.701 [2024-04-26 14:25:03.048932] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.701 [2024-04-26 14:25:03.048954] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.701 [2024-04-26 14:25:03.048969] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.701 [2024-04-26 14:25:03.053055] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.701 [2024-04-26 14:25:03.062175] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.701 [2024-04-26 14:25:03.062732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.701 [2024-04-26 14:25:03.062961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.701 [2024-04-26 14:25:03.063011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.701 [2024-04-26 14:25:03.063037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.701 [2024-04-26 14:25:03.063311] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.701 [2024-04-26 14:25:03.063576] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.701 [2024-04-26 14:25:03.063597] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.701 [2024-04-26 14:25:03.063613] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.701 [2024-04-26 14:25:03.067683] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.701 [2024-04-26 14:25:03.076772] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.701 [2024-04-26 14:25:03.077309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.701 [2024-04-26 14:25:03.077473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.701 [2024-04-26 14:25:03.077501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.701 [2024-04-26 14:25:03.077519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.701 [2024-04-26 14:25:03.077799] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.701 [2024-04-26 14:25:03.078065] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.701 [2024-04-26 14:25:03.078087] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.702 [2024-04-26 14:25:03.078101] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.702 [2024-04-26 14:25:03.082136] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.702 [2024-04-26 14:25:03.091152] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.702 [2024-04-26 14:25:03.091706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.702 [2024-04-26 14:25:03.091840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.702 [2024-04-26 14:25:03.091866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.702 [2024-04-26 14:25:03.091883] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.702 [2024-04-26 14:25:03.092145] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.702 [2024-04-26 14:25:03.092408] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.702 [2024-04-26 14:25:03.092429] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.702 [2024-04-26 14:25:03.092444] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.702 [2024-04-26 14:25:03.096467] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.702 [2024-04-26 14:25:03.105448] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.702 [2024-04-26 14:25:03.105942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.702 [2024-04-26 14:25:03.106096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.702 [2024-04-26 14:25:03.106125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.702 [2024-04-26 14:25:03.106143] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.702 [2024-04-26 14:25:03.106417] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.702 [2024-04-26 14:25:03.106695] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.702 [2024-04-26 14:25:03.106719] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.702 [2024-04-26 14:25:03.106735] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.702 [2024-04-26 14:25:03.110742] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.702 [2024-04-26 14:25:03.119963] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.702 [2024-04-26 14:25:03.120426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.702 [2024-04-26 14:25:03.120718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.702 [2024-04-26 14:25:03.120748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.702 [2024-04-26 14:25:03.120766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.702 [2024-04-26 14:25:03.121034] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.702 [2024-04-26 14:25:03.121299] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.702 [2024-04-26 14:25:03.121321] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.702 [2024-04-26 14:25:03.121336] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.702 [2024-04-26 14:25:03.125365] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.702 [2024-04-26 14:25:03.134352] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.702 [2024-04-26 14:25:03.134811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.702 [2024-04-26 14:25:03.135030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.702 [2024-04-26 14:25:03.135078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.702 [2024-04-26 14:25:03.135096] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.702 [2024-04-26 14:25:03.135357] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.702 [2024-04-26 14:25:03.135621] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.702 [2024-04-26 14:25:03.135651] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.702 [2024-04-26 14:25:03.135667] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.702 [2024-04-26 14:25:03.139666] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.702 [2024-04-26 14:25:03.148688] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.702 [2024-04-26 14:25:03.149108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.702 [2024-04-26 14:25:03.149276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.702 [2024-04-26 14:25:03.149303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.702 [2024-04-26 14:25:03.149321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.702 [2024-04-26 14:25:03.149584] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.702 [2024-04-26 14:25:03.149869] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.702 [2024-04-26 14:25:03.149892] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.702 [2024-04-26 14:25:03.149908] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.702 [2024-04-26 14:25:03.153922] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.702 [2024-04-26 14:25:03.163148] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.702 [2024-04-26 14:25:03.163628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.702 [2024-04-26 14:25:03.163945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.702 [2024-04-26 14:25:03.163971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.702 [2024-04-26 14:25:03.163988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.702 [2024-04-26 14:25:03.164250] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.702 [2024-04-26 14:25:03.164521] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.702 [2024-04-26 14:25:03.164542] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.702 [2024-04-26 14:25:03.164557] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.702 [2024-04-26 14:25:03.168563] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.702 [2024-04-26 14:25:03.177546] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.702 [2024-04-26 14:25:03.178024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.702 [2024-04-26 14:25:03.178187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.702 [2024-04-26 14:25:03.178213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.702 [2024-04-26 14:25:03.178229] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.702 [2024-04-26 14:25:03.178490] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.702 [2024-04-26 14:25:03.178766] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.702 [2024-04-26 14:25:03.178788] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.702 [2024-04-26 14:25:03.178803] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.702 [2024-04-26 14:25:03.182810] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.702 [2024-04-26 14:25:03.192021] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.702 [2024-04-26 14:25:03.192414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.702 [2024-04-26 14:25:03.192576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.702 [2024-04-26 14:25:03.192602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.702 [2024-04-26 14:25:03.192619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.702 [2024-04-26 14:25:03.192888] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.702 [2024-04-26 14:25:03.193152] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.702 [2024-04-26 14:25:03.193180] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.702 [2024-04-26 14:25:03.193195] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.702 [2024-04-26 14:25:03.197199] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.702 [2024-04-26 14:25:03.206418] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.702 [2024-04-26 14:25:03.206877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.702 [2024-04-26 14:25:03.207180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.703 [2024-04-26 14:25:03.207206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.703 [2024-04-26 14:25:03.207222] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.703 [2024-04-26 14:25:03.207483] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.703 [2024-04-26 14:25:03.207757] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.703 [2024-04-26 14:25:03.207779] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.703 [2024-04-26 14:25:03.207794] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.703 [2024-04-26 14:25:03.211818] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.703 [2024-04-26 14:25:03.220798] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.703 [2024-04-26 14:25:03.221289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.703 [2024-04-26 14:25:03.221453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.703 [2024-04-26 14:25:03.221483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.703 [2024-04-26 14:25:03.221500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.703 [2024-04-26 14:25:03.221780] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.703 [2024-04-26 14:25:03.222046] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.703 [2024-04-26 14:25:03.222068] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.703 [2024-04-26 14:25:03.222083] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.703 [2024-04-26 14:25:03.226084] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.703 [2024-04-26 14:25:03.235290] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.703 [2024-04-26 14:25:03.235747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.703 [2024-04-26 14:25:03.235994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.703 [2024-04-26 14:25:03.236044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.703 [2024-04-26 14:25:03.236061] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.703 [2024-04-26 14:25:03.236321] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.703 [2024-04-26 14:25:03.236585] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.703 [2024-04-26 14:25:03.236607] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.703 [2024-04-26 14:25:03.236629] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.703 [2024-04-26 14:25:03.240643] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.703 [2024-04-26 14:25:03.249809] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.703 [2024-04-26 14:25:03.250186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.703 [2024-04-26 14:25:03.250476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.703 [2024-04-26 14:25:03.250504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.703 [2024-04-26 14:25:03.250521] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.703 [2024-04-26 14:25:03.250791] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.703 [2024-04-26 14:25:03.251055] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.703 [2024-04-26 14:25:03.251076] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.703 [2024-04-26 14:25:03.251091] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.703 [2024-04-26 14:25:03.255102] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.703 [2024-04-26 14:25:03.264431] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.703 [2024-04-26 14:25:03.265016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.703 [2024-04-26 14:25:03.265196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.703 [2024-04-26 14:25:03.265253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.703 [2024-04-26 14:25:03.265282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.703 [2024-04-26 14:25:03.265551] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.703 [2024-04-26 14:25:03.265828] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.703 [2024-04-26 14:25:03.265851] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.703 [2024-04-26 14:25:03.265866] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.962 [2024-04-26 14:25:03.270001] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.962 [2024-04-26 14:25:03.278829] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.962 [2024-04-26 14:25:03.279337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.962 [2024-04-26 14:25:03.279496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.962 [2024-04-26 14:25:03.279526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.962 [2024-04-26 14:25:03.279544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.962 [2024-04-26 14:25:03.279823] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.962 [2024-04-26 14:25:03.280089] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.962 [2024-04-26 14:25:03.280110] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.962 [2024-04-26 14:25:03.280127] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.962 [2024-04-26 14:25:03.284147] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.962 [2024-04-26 14:25:03.293114] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.962 [2024-04-26 14:25:03.293522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.962 [2024-04-26 14:25:03.293683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.962 [2024-04-26 14:25:03.293710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.962 [2024-04-26 14:25:03.293727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.962 [2024-04-26 14:25:03.293988] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.962 [2024-04-26 14:25:03.294252] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.962 [2024-04-26 14:25:03.294273] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.962 [2024-04-26 14:25:03.294288] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.962 [2024-04-26 14:25:03.298285] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.962 [2024-04-26 14:25:03.307488] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.962 [2024-04-26 14:25:03.308010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.962 [2024-04-26 14:25:03.308168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.962 [2024-04-26 14:25:03.308197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.962 [2024-04-26 14:25:03.308215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.962 [2024-04-26 14:25:03.308482] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.962 [2024-04-26 14:25:03.308760] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.962 [2024-04-26 14:25:03.308782] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.962 [2024-04-26 14:25:03.308798] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.962 [2024-04-26 14:25:03.312798] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.962 [2024-04-26 14:25:03.322027] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.962 [2024-04-26 14:25:03.322468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.962 [2024-04-26 14:25:03.322682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.962 [2024-04-26 14:25:03.322732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.962 [2024-04-26 14:25:03.322749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.962 [2024-04-26 14:25:03.323011] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.962 [2024-04-26 14:25:03.323274] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.962 [2024-04-26 14:25:03.323295] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.962 [2024-04-26 14:25:03.323311] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.962 [2024-04-26 14:25:03.327325] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.962 [2024-04-26 14:25:03.336308] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.962 [2024-04-26 14:25:03.336828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.962 [2024-04-26 14:25:03.336996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.962 [2024-04-26 14:25:03.337024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.962 [2024-04-26 14:25:03.337042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.962 [2024-04-26 14:25:03.337309] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.962 [2024-04-26 14:25:03.337574] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.962 [2024-04-26 14:25:03.337594] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.962 [2024-04-26 14:25:03.337610] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.962 [2024-04-26 14:25:03.341626] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.962 [2024-04-26 14:25:03.350649] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.962 [2024-04-26 14:25:03.351152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.962 [2024-04-26 14:25:03.351391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.962 [2024-04-26 14:25:03.351418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.962 [2024-04-26 14:25:03.351436] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.962 [2024-04-26 14:25:03.351714] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.962 [2024-04-26 14:25:03.351986] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.962 [2024-04-26 14:25:03.352007] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.962 [2024-04-26 14:25:03.352023] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.962 [2024-04-26 14:25:03.356036] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.962 [2024-04-26 14:25:03.365004] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.962 [2024-04-26 14:25:03.365456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.962 [2024-04-26 14:25:03.365615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.963 [2024-04-26 14:25:03.365652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.963 [2024-04-26 14:25:03.365671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.963 [2024-04-26 14:25:03.365945] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.963 [2024-04-26 14:25:03.366211] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.963 [2024-04-26 14:25:03.366233] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.963 [2024-04-26 14:25:03.366249] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.963 [2024-04-26 14:25:03.370246] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.963 [2024-04-26 14:25:03.379455] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.963 [2024-04-26 14:25:03.379903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.963 [2024-04-26 14:25:03.380049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.963 [2024-04-26 14:25:03.380078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.963 [2024-04-26 14:25:03.380095] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.963 [2024-04-26 14:25:03.380357] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.963 [2024-04-26 14:25:03.380620] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.963 [2024-04-26 14:25:03.380650] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.963 [2024-04-26 14:25:03.380666] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.963 [2024-04-26 14:25:03.384665] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.963 [2024-04-26 14:25:03.393869] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.963 [2024-04-26 14:25:03.394278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.963 [2024-04-26 14:25:03.394415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.963 [2024-04-26 14:25:03.394441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.963 [2024-04-26 14:25:03.394458] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.963 [2024-04-26 14:25:03.394727] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.963 [2024-04-26 14:25:03.394991] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.963 [2024-04-26 14:25:03.395012] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.963 [2024-04-26 14:25:03.395027] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.963 [2024-04-26 14:25:03.399026] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.963 [2024-04-26 14:25:03.408244] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.963 [2024-04-26 14:25:03.408709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.963 [2024-04-26 14:25:03.408872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.963 [2024-04-26 14:25:03.408898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.963 [2024-04-26 14:25:03.408914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.963 [2024-04-26 14:25:03.409175] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.963 [2024-04-26 14:25:03.409438] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.963 [2024-04-26 14:25:03.409460] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.963 [2024-04-26 14:25:03.409475] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.963 [2024-04-26 14:25:03.413489] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.963 [2024-04-26 14:25:03.422698] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.963 [2024-04-26 14:25:03.423234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.963 [2024-04-26 14:25:03.423353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.963 [2024-04-26 14:25:03.423381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.963 [2024-04-26 14:25:03.423398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.963 [2024-04-26 14:25:03.423670] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.963 [2024-04-26 14:25:03.423934] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.963 [2024-04-26 14:25:03.423956] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.963 [2024-04-26 14:25:03.423971] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.963 [2024-04-26 14:25:03.427996] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.963 [2024-04-26 14:25:03.436994] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.963 [2024-04-26 14:25:03.437518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.963 [2024-04-26 14:25:03.437702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.963 [2024-04-26 14:25:03.437732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.963 [2024-04-26 14:25:03.437750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.963 [2024-04-26 14:25:03.438017] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.963 [2024-04-26 14:25:03.438282] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.963 [2024-04-26 14:25:03.438304] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.963 [2024-04-26 14:25:03.438319] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.963 [2024-04-26 14:25:03.442330] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.963 [2024-04-26 14:25:03.451309] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.963 [2024-04-26 14:25:03.451744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.963 [2024-04-26 14:25:03.451927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.963 [2024-04-26 14:25:03.451955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.963 [2024-04-26 14:25:03.451972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.963 [2024-04-26 14:25:03.452245] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.963 [2024-04-26 14:25:03.452511] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.963 [2024-04-26 14:25:03.452533] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.963 [2024-04-26 14:25:03.452548] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.963 [2024-04-26 14:25:03.456567] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.963 [2024-04-26 14:25:03.465790] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.963 [2024-04-26 14:25:03.466356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.963 [2024-04-26 14:25:03.466538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.963 [2024-04-26 14:25:03.466566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.963 [2024-04-26 14:25:03.466591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.963 [2024-04-26 14:25:03.466872] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.963 [2024-04-26 14:25:03.467138] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.963 [2024-04-26 14:25:03.467160] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.963 [2024-04-26 14:25:03.467174] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.963 [2024-04-26 14:25:03.471261] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.963 [2024-04-26 14:25:03.480248] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.963 [2024-04-26 14:25:03.480759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.963 [2024-04-26 14:25:03.480918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.963 [2024-04-26 14:25:03.480945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.963 [2024-04-26 14:25:03.480963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.963 [2024-04-26 14:25:03.481231] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.963 [2024-04-26 14:25:03.481503] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.963 [2024-04-26 14:25:03.481524] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.963 [2024-04-26 14:25:03.481539] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.963 [2024-04-26 14:25:03.485559] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.963 [2024-04-26 14:25:03.494555] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.963 [2024-04-26 14:25:03.495056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.963 [2024-04-26 14:25:03.495262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.963 [2024-04-26 14:25:03.495310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.963 [2024-04-26 14:25:03.495327] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.963 [2024-04-26 14:25:03.495588] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.963 [2024-04-26 14:25:03.495879] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.963 [2024-04-26 14:25:03.495904] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.963 [2024-04-26 14:25:03.495920] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.963 [2024-04-26 14:25:03.500043] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.963 [2024-04-26 14:25:03.509067] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.963 [2024-04-26 14:25:03.509583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.963 [2024-04-26 14:25:03.509837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.963 [2024-04-26 14:25:03.509890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.963 [2024-04-26 14:25:03.509908] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.963 [2024-04-26 14:25:03.510184] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.963 [2024-04-26 14:25:03.510450] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.963 [2024-04-26 14:25:03.510472] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.963 [2024-04-26 14:25:03.510487] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.963 [2024-04-26 14:25:03.514488] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:21.963 [2024-04-26 14:25:03.523470] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.963 [2024-04-26 14:25:03.523989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.963 [2024-04-26 14:25:03.524153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.963 [2024-04-26 14:25:03.524181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:21.963 [2024-04-26 14:25:03.524198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:21.963 [2024-04-26 14:25:03.524466] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:21.963 [2024-04-26 14:25:03.524743] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.963 [2024-04-26 14:25:03.524766] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.963 [2024-04-26 14:25:03.524782] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.963 [2024-04-26 14:25:03.528883] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.224 [2024-04-26 14:25:03.538021] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.224 [2024-04-26 14:25:03.538547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.224 [2024-04-26 14:25:03.538787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.224 [2024-04-26 14:25:03.538839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.224 [2024-04-26 14:25:03.538857] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.224 [2024-04-26 14:25:03.539125] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.224 [2024-04-26 14:25:03.539389] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.224 [2024-04-26 14:25:03.539410] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.224 [2024-04-26 14:25:03.539425] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.224 [2024-04-26 14:25:03.543465] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.224 [2024-04-26 14:25:03.552509] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.224 [2024-04-26 14:25:03.553079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.224 [2024-04-26 14:25:03.553245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.224 [2024-04-26 14:25:03.553275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.224 [2024-04-26 14:25:03.553293] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.224 [2024-04-26 14:25:03.553566] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.224 [2024-04-26 14:25:03.553843] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.224 [2024-04-26 14:25:03.553866] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.224 [2024-04-26 14:25:03.553882] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.224 [2024-04-26 14:25:03.557889] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.224 [2024-04-26 14:25:03.566910] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.224 [2024-04-26 14:25:03.567443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.224 [2024-04-26 14:25:03.567626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.224 [2024-04-26 14:25:03.567665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.224 [2024-04-26 14:25:03.567683] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.224 [2024-04-26 14:25:03.567951] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.224 [2024-04-26 14:25:03.568215] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.224 [2024-04-26 14:25:03.568237] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.224 [2024-04-26 14:25:03.568252] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.224 [2024-04-26 14:25:03.572268] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.224 [2024-04-26 14:25:03.581260] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.224 [2024-04-26 14:25:03.581739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.224 [2024-04-26 14:25:03.581980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.224 [2024-04-26 14:25:03.582028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.224 [2024-04-26 14:25:03.582046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.224 [2024-04-26 14:25:03.582313] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.224 [2024-04-26 14:25:03.582578] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.224 [2024-04-26 14:25:03.582600] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.224 [2024-04-26 14:25:03.582615] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.224 [2024-04-26 14:25:03.586650] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.224 [2024-04-26 14:25:03.595665] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.224 [2024-04-26 14:25:03.596225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.224 [2024-04-26 14:25:03.596393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.224 [2024-04-26 14:25:03.596421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.224 [2024-04-26 14:25:03.596439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.224 [2024-04-26 14:25:03.596719] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.224 [2024-04-26 14:25:03.596991] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.224 [2024-04-26 14:25:03.597012] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.224 [2024-04-26 14:25:03.597027] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.224 [2024-04-26 14:25:03.601056] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.224 [2024-04-26 14:25:03.610038] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.224 [2024-04-26 14:25:03.610556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.224 [2024-04-26 14:25:03.610790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.224 [2024-04-26 14:25:03.610820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.224 [2024-04-26 14:25:03.610837] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.224 [2024-04-26 14:25:03.611105] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.224 [2024-04-26 14:25:03.611370] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.224 [2024-04-26 14:25:03.611392] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.224 [2024-04-26 14:25:03.611407] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.224 [2024-04-26 14:25:03.615434] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.224 [2024-04-26 14:25:03.624400] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.224 [2024-04-26 14:25:03.624987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.224 [2024-04-26 14:25:03.625142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.224 [2024-04-26 14:25:03.625170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.224 [2024-04-26 14:25:03.625188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.224 [2024-04-26 14:25:03.625455] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.224 [2024-04-26 14:25:03.625735] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.224 [2024-04-26 14:25:03.625757] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.224 [2024-04-26 14:25:03.625772] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.224 [2024-04-26 14:25:03.629772] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.224 [2024-04-26 14:25:03.638762] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.224 [2024-04-26 14:25:03.639209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.224 [2024-04-26 14:25:03.639363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.225 [2024-04-26 14:25:03.639392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.225 [2024-04-26 14:25:03.639410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.225 [2024-04-26 14:25:03.639690] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.225 [2024-04-26 14:25:03.639956] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.225 [2024-04-26 14:25:03.639978] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.225 [2024-04-26 14:25:03.639999] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.225 [2024-04-26 14:25:03.643998] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.225 [2024-04-26 14:25:03.653209] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.225 [2024-04-26 14:25:03.653813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.225 [2024-04-26 14:25:03.653974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.225 [2024-04-26 14:25:03.654003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.225 [2024-04-26 14:25:03.654021] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.225 [2024-04-26 14:25:03.654289] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.225 [2024-04-26 14:25:03.654554] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.225 [2024-04-26 14:25:03.654575] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.225 [2024-04-26 14:25:03.654590] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.225 [2024-04-26 14:25:03.658609] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.225 [2024-04-26 14:25:03.667603] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.225 [2024-04-26 14:25:03.668133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.225 [2024-04-26 14:25:03.668358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.225 [2024-04-26 14:25:03.668408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.225 [2024-04-26 14:25:03.668425] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.225 [2024-04-26 14:25:03.668706] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.225 [2024-04-26 14:25:03.668972] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.225 [2024-04-26 14:25:03.668993] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.225 [2024-04-26 14:25:03.669008] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.225 [2024-04-26 14:25:03.673006] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.225 [2024-04-26 14:25:03.682029] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.225 [2024-04-26 14:25:03.682526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.225 [2024-04-26 14:25:03.682724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.225 [2024-04-26 14:25:03.682786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.225 [2024-04-26 14:25:03.682804] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.225 [2024-04-26 14:25:03.683066] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.225 [2024-04-26 14:25:03.683329] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.225 [2024-04-26 14:25:03.683350] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.225 [2024-04-26 14:25:03.683372] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.225 [2024-04-26 14:25:03.687371] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.225 [2024-04-26 14:25:03.696359] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.225 [2024-04-26 14:25:03.696862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.225 [2024-04-26 14:25:03.697096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.225 [2024-04-26 14:25:03.697146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.225 [2024-04-26 14:25:03.697164] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.225 [2024-04-26 14:25:03.697431] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.225 [2024-04-26 14:25:03.697710] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.225 [2024-04-26 14:25:03.697733] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.225 [2024-04-26 14:25:03.697748] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.225 [2024-04-26 14:25:03.701766] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3210369 Killed "${NVMF_APP[@]}" "$@"
00:20:22.225 14:25:03 -- host/bdevperf.sh@36 -- # tgt_init
00:20:22.225 14:25:03 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:20:22.225 14:25:03 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:20:22.225 14:25:03 -- common/autotest_common.sh@710 -- # xtrace_disable
00:20:22.225 14:25:03 -- common/autotest_common.sh@10 -- # set +x
00:20:22.225 [2024-04-26 14:25:03.710810] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.225 [2024-04-26 14:25:03.711292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.225 [2024-04-26 14:25:03.711447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.225 [2024-04-26 14:25:03.711475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.225 [2024-04-26 14:25:03.711493] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.225 14:25:03 -- nvmf/common.sh@470 -- # nvmfpid=3211102
00:20:22.225 14:25:03 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:20:22.225 14:25:03 -- nvmf/common.sh@471 -- # waitforlisten 3211102
00:20:22.225 [2024-04-26 14:25:03.711773] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.225 14:25:03 -- common/autotest_common.sh@817 -- # '[' -z 3211102 ']'
00:20:22.225 14:25:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:22.225 [2024-04-26 14:25:03.712040] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.225 [2024-04-26 14:25:03.712062] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.225 [2024-04-26 14:25:03.712077] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.225 14:25:03 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:22.225 14:25:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:22.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:22.225 14:25:03 -- common/autotest_common.sh@826 -- # xtrace_disable
00:20:22.225 14:25:03 -- common/autotest_common.sh@10 -- # set +x
00:20:22.225 [2024-04-26 14:25:03.716081] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
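At this point the script has deliberately killed the previous target (the "Killed" line from bdevperf.sh line 35), and tgt_init/nvmfappstart relaunch nvmf_tgt inside the cvl_0_0_ns_spdk network namespace, then wait for its RPC socket. A rough sketch of that restart-and-wait step under the same parameters (the loop shape and helper-free form are assumptions; the real logic lives in nvmf/common.sh and autotest_common.sh):

    # Relaunch the target and poll for /var/tmp/spdk.sock, mirroring
    # nvmfappstart -m 0xE / waitforlisten with max_retries=100.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
      [[ -S /var/tmp/spdk.sock ]] && break   # socket appears once the app is up
      sleep 0.1
    done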
00:20:22.225 [2024-04-26 14:25:03.725302] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.225 [2024-04-26 14:25:03.725747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.225 [2024-04-26 14:25:03.725883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.225 [2024-04-26 14:25:03.725911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.225 [2024-04-26 14:25:03.725929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.225 [2024-04-26 14:25:03.726201] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.225 [2024-04-26 14:25:03.726467] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.225 [2024-04-26 14:25:03.726489] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.225 [2024-04-26 14:25:03.726504] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.225 [2024-04-26 14:25:03.730515] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.225 [2024-04-26 14:25:03.739729] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.225 [2024-04-26 14:25:03.740153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.225 [2024-04-26 14:25:03.740315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.225 [2024-04-26 14:25:03.740341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.225 [2024-04-26 14:25:03.740359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.225 [2024-04-26 14:25:03.740620] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.225 [2024-04-26 14:25:03.740894] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.225 [2024-04-26 14:25:03.740915] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.225 [2024-04-26 14:25:03.740930] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.225 [2024-04-26 14:25:03.744931] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.225 [2024-04-26 14:25:03.754084] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.225 [2024-04-26 14:25:03.754459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.225 [2024-04-26 14:25:03.754601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.226 [2024-04-26 14:25:03.754627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.226 [2024-04-26 14:25:03.754652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.226 [2024-04-26 14:25:03.754915] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.226 [2024-04-26 14:25:03.755179] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.226 [2024-04-26 14:25:03.755200] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.226 [2024-04-26 14:25:03.755215] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.226 [2024-04-26 14:25:03.759219] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.226 [2024-04-26 14:25:03.760881] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
00:20:22.226 [2024-04-26 14:25:03.760988] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:22.226 [2024-04-26 14:25:03.768415] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.226 [2024-04-26 14:25:03.768843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.226 [2024-04-26 14:25:03.768982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.226 [2024-04-26 14:25:03.769008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.226 [2024-04-26 14:25:03.769026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.226 [2024-04-26 14:25:03.769287] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.226 [2024-04-26 14:25:03.769551] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.226 [2024-04-26 14:25:03.769573] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.226 [2024-04-26 14:25:03.769589] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.226 [2024-04-26 14:25:03.773616] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.226 [2024-04-26 14:25:03.782830] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.226 [2024-04-26 14:25:03.783260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.226 [2024-04-26 14:25:03.783446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.226 [2024-04-26 14:25:03.783475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.226 [2024-04-26 14:25:03.783493] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.226 [2024-04-26 14:25:03.783774] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.226 [2024-04-26 14:25:03.784041] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.226 [2024-04-26 14:25:03.784063] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.226 [2024-04-26 14:25:03.784078] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.226 [2024-04-26 14:25:03.788135] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.485 EAL: No free 2048 kB hugepages reported on node 1
00:20:22.485 [2024-04-26 14:25:03.797317] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.485 [2024-04-26 14:25:03.797752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.485 [2024-04-26 14:25:03.797897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.485 [2024-04-26 14:25:03.797925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.485 [2024-04-26 14:25:03.797943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.485 [2024-04-26 14:25:03.798206] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.485 [2024-04-26 14:25:03.798470] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.485 [2024-04-26 14:25:03.798492] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.485 [2024-04-26 14:25:03.798507] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.485 [2024-04-26 14:25:03.802524] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.485 [2024-04-26 14:25:03.811734] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.486 [2024-04-26 14:25:03.812146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.486 [2024-04-26 14:25:03.812290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.486 [2024-04-26 14:25:03.812316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.486 [2024-04-26 14:25:03.812333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.486 [2024-04-26 14:25:03.812595] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.486 [2024-04-26 14:25:03.813092] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.486 [2024-04-26 14:25:03.813115] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.486 [2024-04-26 14:25:03.813130] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.486 [2024-04-26 14:25:03.817132] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.486 [2024-04-26 14:25:03.826102] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.486 [2024-04-26 14:25:03.826501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.486 [2024-04-26 14:25:03.826644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.486 [2024-04-26 14:25:03.826671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.486 [2024-04-26 14:25:03.826688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.486 [2024-04-26 14:25:03.826951] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.486 [2024-04-26 14:25:03.827214] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.486 [2024-04-26 14:25:03.827236] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.486 [2024-04-26 14:25:03.827250] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.486 [2024-04-26 14:25:03.830692] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3
00:20:22.486 [2024-04-26 14:25:03.831249] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.486 [2024-04-26 14:25:03.840553] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.486 [2024-04-26 14:25:03.841143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.486 [2024-04-26 14:25:03.841322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.486 [2024-04-26 14:25:03.841349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.486 [2024-04-26 14:25:03.841368] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.486 [2024-04-26 14:25:03.841649] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.486 [2024-04-26 14:25:03.841929] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.486 [2024-04-26 14:25:03.841951] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.486 [2024-04-26 14:25:03.841969] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.486 [2024-04-26 14:25:03.846013] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.486 [2024-04-26 14:25:03.855041] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.486 [2024-04-26 14:25:03.855540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.486 [2024-04-26 14:25:03.855704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.486 [2024-04-26 14:25:03.855732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.486 [2024-04-26 14:25:03.855751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.486 [2024-04-26 14:25:03.856021] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.486 [2024-04-26 14:25:03.856289] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.486 [2024-04-26 14:25:03.856311] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.486 [2024-04-26 14:25:03.856328] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.486 [2024-04-26 14:25:03.860338] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.486 [2024-04-26 14:25:03.869556] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.486 [2024-04-26 14:25:03.870031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.486 [2024-04-26 14:25:03.870167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.486 [2024-04-26 14:25:03.870196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.486 [2024-04-26 14:25:03.870215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.486 [2024-04-26 14:25:03.870489] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.486 [2024-04-26 14:25:03.870766] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.486 [2024-04-26 14:25:03.870790] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.486 [2024-04-26 14:25:03.870807] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.486 [2024-04-26 14:25:03.874809] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.486 [2024-04-26 14:25:03.884027] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.486 [2024-04-26 14:25:03.884492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.486 [2024-04-26 14:25:03.884714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.486 [2024-04-26 14:25:03.884751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.486 [2024-04-26 14:25:03.884770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.486 [2024-04-26 14:25:03.885038] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.486 [2024-04-26 14:25:03.885305] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.486 [2024-04-26 14:25:03.885326] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.486 [2024-04-26 14:25:03.885342] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.486 [2024-04-26 14:25:03.889345] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.486 [2024-04-26 14:25:03.898395] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.486 [2024-04-26 14:25:03.898994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.486 [2024-04-26 14:25:03.899176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.486 [2024-04-26 14:25:03.899204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.486 [2024-04-26 14:25:03.899224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.486 [2024-04-26 14:25:03.899498] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.486 [2024-04-26 14:25:03.899779] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.486 [2024-04-26 14:25:03.899802] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.486 [2024-04-26 14:25:03.899819] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.486 [2024-04-26 14:25:03.903863] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.486 [2024-04-26 14:25:03.912869] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.486 [2024-04-26 14:25:03.913417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.486 [2024-04-26 14:25:03.913596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.486 [2024-04-26 14:25:03.913628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.486 [2024-04-26 14:25:03.913660] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.486 [2024-04-26 14:25:03.913937] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.487 [2024-04-26 14:25:03.914211] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.487 [2024-04-26 14:25:03.914233] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.487 [2024-04-26 14:25:03.914250] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.487 [2024-04-26 14:25:03.918256] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.487 [2024-04-26 14:25:03.927232] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.487 [2024-04-26 14:25:03.927677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.487 [2024-04-26 14:25:03.927842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.487 [2024-04-26 14:25:03.927869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.487 [2024-04-26 14:25:03.927887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.487 [2024-04-26 14:25:03.928153] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.487 [2024-04-26 14:25:03.928418] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.487 [2024-04-26 14:25:03.928440] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.487 [2024-04-26 14:25:03.928455] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.487 [2024-04-26 14:25:03.932461] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.487 [2024-04-26 14:25:03.941682] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.487 [2024-04-26 14:25:03.942175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.487 [2024-04-26 14:25:03.942374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.487 [2024-04-26 14:25:03.942404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.487 [2024-04-26 14:25:03.942422] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.487 [2024-04-26 14:25:03.942711] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.487 [2024-04-26 14:25:03.942979] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.487 [2024-04-26 14:25:03.943000] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.487 [2024-04-26 14:25:03.943017] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.487 [2024-04-26 14:25:03.947023] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.487 [2024-04-26 14:25:03.949825] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:22.487 [2024-04-26 14:25:03.949864] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:22.487 [2024-04-26 14:25:03.949880] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:22.487 [2024-04-26 14:25:03.949893] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:22.487 [2024-04-26 14:25:03.949904] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:22.487 [2024-04-26 14:25:03.950140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:20:22.487 [2024-04-26 14:25:03.950192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:20:22.487 [2024-04-26 14:25:03.950196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:22.487 [2024-04-26 14:25:03.956023] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.487 [2024-04-26 14:25:03.956601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.487 [2024-04-26 14:25:03.956781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.487 [2024-04-26 14:25:03.956811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.487 [2024-04-26 14:25:03.956832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.487 [2024-04-26 14:25:03.957114] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.487 [2024-04-26 14:25:03.957386] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.487 [2024-04-26 14:25:03.957408] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.487 [2024-04-26 14:25:03.957425] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.487 [2024-04-26 14:25:03.961474] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.487 [2024-04-26 14:25:03.970560] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.487 [2024-04-26 14:25:03.971209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.487 [2024-04-26 14:25:03.971378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.487 [2024-04-26 14:25:03.971408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.487 [2024-04-26 14:25:03.971428] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.487 [2024-04-26 14:25:03.971723] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.487 [2024-04-26 14:25:03.972008] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.487 [2024-04-26 14:25:03.972031] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.487 [2024-04-26 14:25:03.972048] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.487 [2024-04-26 14:25:03.976147] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
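The three reactor threads on cores 1, 2 and 3 ("Total cores available: 3") are exactly what the -m 0xE core mask requests; a one-liner decoding the mask, shown only for reference (plain shell arithmetic, not from the test):

    # bit i set in the mask => a reactor on core i: 0xE = 0b1110 = cores 1, 2, 3
    printf 'mask 0x%X -> cores 1, 2, 3\n' $(( (1 << 1) | (1 << 2) | (1 << 3) ))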
00:20:22.487 [2024-04-26 14:25:03.985020] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.487 [2024-04-26 14:25:03.985653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.487 [2024-04-26 14:25:03.985806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.487 [2024-04-26 14:25:03.985834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.487 [2024-04-26 14:25:03.985854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.487 [2024-04-26 14:25:03.986138] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.487 [2024-04-26 14:25:03.986410] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.487 [2024-04-26 14:25:03.986433] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.487 [2024-04-26 14:25:03.986450] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.487 [2024-04-26 14:25:03.990509] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.487 [2024-04-26 14:25:03.999577] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.487 [2024-04-26 14:25:04.000099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.487 [2024-04-26 14:25:04.000230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.487 [2024-04-26 14:25:04.000256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.487 [2024-04-26 14:25:04.000275] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.487 [2024-04-26 14:25:04.000546] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.487 [2024-04-26 14:25:04.000821] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.487 [2024-04-26 14:25:04.000844] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.487 [2024-04-26 14:25:04.000862] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.487 [2024-04-26 14:25:04.004952] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.487 [2024-04-26 14:25:04.014148] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.487 [2024-04-26 14:25:04.014688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.487 [2024-04-26 14:25:04.014845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.487 [2024-04-26 14:25:04.014873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.487 [2024-04-26 14:25:04.014893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.487 [2024-04-26 14:25:04.015166] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.487 [2024-04-26 14:25:04.015437] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.487 [2024-04-26 14:25:04.015469] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.487 [2024-04-26 14:25:04.015487] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.487 [2024-04-26 14:25:04.019583] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.487 [2024-04-26 14:25:04.028619] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.488 [2024-04-26 14:25:04.029124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.488 [2024-04-26 14:25:04.029244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.488 [2024-04-26 14:25:04.029272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.488 [2024-04-26 14:25:04.029291] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.488 [2024-04-26 14:25:04.029568] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.488 [2024-04-26 14:25:04.029845] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.488 [2024-04-26 14:25:04.029869] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.488 [2024-04-26 14:25:04.029886] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.488 [2024-04-26 14:25:04.033893] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.488 [2024-04-26 14:25:04.043112] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.488 [2024-04-26 14:25:04.043514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.488 [2024-04-26 14:25:04.043668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.488 [2024-04-26 14:25:04.043696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.488 [2024-04-26 14:25:04.043714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.488 [2024-04-26 14:25:04.043977] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.488 [2024-04-26 14:25:04.044241] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.488 [2024-04-26 14:25:04.044262] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.488 [2024-04-26 14:25:04.044278] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.488 [2024-04-26 14:25:04.048290] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.746 [2024-04-26 14:25:04.057479] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.746 [2024-04-26 14:25:04.057909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.746 [2024-04-26 14:25:04.058028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.746 [2024-04-26 14:25:04.058055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.746 [2024-04-26 14:25:04.058074] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.746 [2024-04-26 14:25:04.058355] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.746 [2024-04-26 14:25:04.058645] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.746 [2024-04-26 14:25:04.058668] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.746 [2024-04-26 14:25:04.058692] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.746 14:25:04 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:20:22.747 14:25:04 -- common/autotest_common.sh@850 -- # return 0
00:20:22.747 14:25:04 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:20:22.747 14:25:04 -- common/autotest_common.sh@716 -- # xtrace_disable
00:20:22.747 14:25:04 -- common/autotest_common.sh@10 -- # set +x
00:20:22.747 [2024-04-26 14:25:04.062750] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.747 14:25:04 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:22.747 14:25:04 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:20:22.747 14:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:22.747 [2024-04-26 14:25:04.086373] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.747 14:25:04 -- common/autotest_common.sh@10 -- # set +x
00:20:22.747 [2024-04-26 14:25:04.086826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.747 [2024-04-26 14:25:04.086962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.747 [2024-04-26 14:25:04.086988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420
00:20:22.747 [2024-04-26 14:25:04.087005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set
00:20:22.747 [2024-04-26 14:25:04.087266] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor
00:20:22.747 [2024-04-26 14:25:04.087530] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.747 [2024-04-26 14:25:04.087551] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.747 [2024-04-26 14:25:04.087566] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.747 [2024-04-26 14:25:04.091147] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:22.747 [2024-04-26 14:25:04.091575] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.747 14:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.747 14:25:04 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:22.747 14:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.747 14:25:04 -- common/autotest_common.sh@10 -- # set +x 00:20:22.747 [2024-04-26 14:25:04.100874] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.747 [2024-04-26 14:25:04.101254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.747 [2024-04-26 14:25:04.101387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.747 [2024-04-26 14:25:04.101413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:22.747 [2024-04-26 14:25:04.101430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:22.747 [2024-04-26 14:25:04.101702] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:22.747 [2024-04-26 14:25:04.101969] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.747 [2024-04-26 14:25:04.101990] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.747 [2024-04-26 14:25:04.102006] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.747 [2024-04-26 14:25:04.106015] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.747 [2024-04-26 14:25:04.115317] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.747 [2024-04-26 14:25:04.115707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.747 [2024-04-26 14:25:04.115822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.747 [2024-04-26 14:25:04.115848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:22.747 [2024-04-26 14:25:04.115864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:22.747 [2024-04-26 14:25:04.116125] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:22.747 [2024-04-26 14:25:04.116388] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.747 [2024-04-26 14:25:04.116409] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.747 [2024-04-26 14:25:04.116424] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.747 [2024-04-26 14:25:04.120437] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:22.747 [2024-04-26 14:25:04.129778] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.747 [2024-04-26 14:25:04.130311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.747 [2024-04-26 14:25:04.130455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.747 [2024-04-26 14:25:04.130482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:22.747 [2024-04-26 14:25:04.130501] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:22.747 [2024-04-26 14:25:04.130783] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:22.747 [2024-04-26 14:25:04.131053] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.747 [2024-04-26 14:25:04.131075] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.747 [2024-04-26 14:25:04.131093] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.747 [2024-04-26 14:25:04.135146] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.747 Malloc0 00:20:22.747 14:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.747 14:25:04 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:22.747 14:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.747 14:25:04 -- common/autotest_common.sh@10 -- # set +x 00:20:22.747 [2024-04-26 14:25:04.144153] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.747 [2024-04-26 14:25:04.144653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.747 [2024-04-26 14:25:04.144789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.747 [2024-04-26 14:25:04.144816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14a0210 with addr=10.0.0.2, port=4420 00:20:22.747 [2024-04-26 14:25:04.144834] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0210 is same with the state(5) to be set 00:20:22.747 14:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.747 14:25:04 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:22.747 [2024-04-26 14:25:04.145101] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0210 (9): Bad file descriptor 00:20:22.747 14:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.747 14:25:04 -- common/autotest_common.sh@10 -- # set +x 00:20:22.747 [2024-04-26 14:25:04.145367] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.747 [2024-04-26 14:25:04.145389] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.747 [2024-04-26 14:25:04.145405] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
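Pulled out of the interleaved xtrace above (plus the add_listener call in the next chunk), the target bring-up is the standard transport / bdev / subsystem / namespace / listener sequence. A sketch of the same five calls issued by hand; rpc_cmd in the harness is a thin wrapper that drives scripts/rpc.py against the app's RPC socket:

    # Same bring-up as host/bdevperf.sh performs via rpc_cmd:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Only after the last call lands does the log flip from connect() errno 111 to "Resetting controller successful."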
00:20:22.747 [2024-04-26 14:25:04.149427] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.747 14:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.747 14:25:04 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:22.747 14:25:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.747 14:25:04 -- common/autotest_common.sh@10 -- # set +x 00:20:22.747 [2024-04-26 14:25:04.156818] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.747 [2024-04-26 14:25:04.158638] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.747 14:25:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.747 14:25:04 -- host/bdevperf.sh@38 -- # wait 3210591 00:20:22.747 [2024-04-26 14:25:04.237205] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:32.714 00:20:32.714 Latency(us) 00:20:32.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.714 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:32.714 Verification LBA range: start 0x0 length 0x4000 00:20:32.714 Nvme1n1 : 15.00 5731.75 22.39 7501.68 0.00 9643.42 649.29 17282.09 00:20:32.714 =================================================================================================================== 00:20:32.714 Total : 5731.75 22.39 7501.68 0.00 9643.42 649.29 17282.09 00:20:32.714 14:25:13 -- host/bdevperf.sh@39 -- # sync 00:20:32.714 14:25:13 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:32.714 14:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:32.714 14:25:13 -- common/autotest_common.sh@10 -- # set +x 00:20:32.714 14:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:32.714 14:25:13 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:20:32.714 14:25:13 -- host/bdevperf.sh@44 -- # nvmftestfini 00:20:32.714 14:25:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:32.714 14:25:13 -- nvmf/common.sh@117 -- # sync 00:20:32.714 14:25:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:32.714 14:25:13 -- nvmf/common.sh@120 -- # set +e 00:20:32.714 14:25:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:32.714 14:25:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:32.714 rmmod nvme_tcp 00:20:32.714 rmmod nvme_fabrics 00:20:32.714 rmmod nvme_keyring 00:20:32.714 14:25:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:32.714 14:25:13 -- nvmf/common.sh@124 -- # set -e 00:20:32.714 14:25:13 -- nvmf/common.sh@125 -- # return 0 00:20:32.714 14:25:13 -- nvmf/common.sh@478 -- # '[' -n 3211102 ']' 00:20:32.714 14:25:13 -- nvmf/common.sh@479 -- # killprocess 3211102 00:20:32.714 14:25:13 -- common/autotest_common.sh@936 -- # '[' -z 3211102 ']' 00:20:32.714 14:25:13 -- common/autotest_common.sh@940 -- # kill -0 3211102 00:20:32.714 14:25:13 -- common/autotest_common.sh@941 -- # uname 00:20:32.714 14:25:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:32.714 14:25:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3211102 00:20:32.714 14:25:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:32.714 14:25:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:32.714 14:25:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 
3211102' 00:20:32.714 killing process with pid 3211102 00:20:32.714 14:25:13 -- common/autotest_common.sh@955 -- # kill 3211102 00:20:32.714 14:25:13 -- common/autotest_common.sh@960 -- # wait 3211102 00:20:32.714 14:25:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:32.714 14:25:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:32.714 14:25:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:32.714 14:25:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:32.714 14:25:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:32.714 14:25:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.714 14:25:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.714 14:25:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.640 14:25:15 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:34.640 00:20:34.640 real 0m21.974s 00:20:34.640 user 0m59.719s 00:20:34.640 sys 0m3.800s 00:20:34.640 14:25:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:34.640 14:25:15 -- common/autotest_common.sh@10 -- # set +x 00:20:34.640 ************************************ 00:20:34.640 END TEST nvmf_bdevperf 00:20:34.640 ************************************ 00:20:34.640 14:25:15 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:20:34.640 14:25:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:34.640 14:25:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:34.640 14:25:15 -- common/autotest_common.sh@10 -- # set +x 00:20:34.640 ************************************ 00:20:34.640 START TEST nvmf_target_disconnect 00:20:34.640 ************************************ 00:20:34.640 14:25:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:20:34.640 * Looking for test storage... 
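One consistency check on the bdevperf table above before the next test: 5731.75 IOPS at the 4096-byte I/O size is exactly the reported 22.39 MiB/s, and Fail/s (7501.68) exceeding IOPS is expected for this run, since the controller spent much of the 15 s runtime mid-reset. The arithmetic, as a one-liner (bc assumed available):

    # 5731.75 IOPS x 4096 B / 1048576 B-per-MiB = 22.39 MiB/s
    echo 'scale=2; 5731.75 * 4096 / 1048576' | bc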
00:20:34.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:34.640 14:25:15 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:34.640 14:25:15 -- nvmf/common.sh@7 -- # uname -s 00:20:34.640 14:25:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:34.640 14:25:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:34.640 14:25:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:34.640 14:25:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:34.640 14:25:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:34.640 14:25:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:34.640 14:25:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:34.640 14:25:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:34.640 14:25:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:34.640 14:25:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:34.640 14:25:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:34.640 14:25:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:34.640 14:25:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:34.640 14:25:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:34.640 14:25:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:34.640 14:25:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:34.640 14:25:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:34.640 14:25:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:34.640 14:25:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:34.640 14:25:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:34.640 14:25:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.640 14:25:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.640 14:25:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.640 14:25:15 -- paths/export.sh@5 -- # export PATH 00:20:34.640 14:25:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.640 14:25:15 -- nvmf/common.sh@47 -- # : 0 00:20:34.640 14:25:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:34.640 14:25:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:34.640 14:25:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:34.640 14:25:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:34.640 14:25:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:34.640 14:25:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:34.640 14:25:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:34.640 14:25:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:34.640 14:25:15 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:20:34.640 14:25:15 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:20:34.640 14:25:15 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:20:34.640 14:25:15 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:20:34.640 14:25:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:34.640 14:25:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.640 14:25:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:34.640 14:25:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:34.640 14:25:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:34.640 14:25:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.640 14:25:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.640 14:25:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.640 14:25:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:34.640 14:25:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:34.640 14:25:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:34.640 14:25:15 -- common/autotest_common.sh@10 -- # set +x 00:20:36.019 14:25:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:36.019 14:25:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:36.019 14:25:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:36.019 14:25:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:36.019 14:25:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:36.019 14:25:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:36.019 14:25:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:36.019 
14:25:17 -- nvmf/common.sh@295 -- # net_devs=() 00:20:36.019 14:25:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:36.019 14:25:17 -- nvmf/common.sh@296 -- # e810=() 00:20:36.019 14:25:17 -- nvmf/common.sh@296 -- # local -ga e810 00:20:36.019 14:25:17 -- nvmf/common.sh@297 -- # x722=() 00:20:36.020 14:25:17 -- nvmf/common.sh@297 -- # local -ga x722 00:20:36.020 14:25:17 -- nvmf/common.sh@298 -- # mlx=() 00:20:36.020 14:25:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:36.020 14:25:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:36.020 14:25:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:36.020 14:25:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:36.020 14:25:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:36.020 14:25:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:36.020 14:25:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:36.020 14:25:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:36.020 14:25:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:36.279 14:25:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:36.279 14:25:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:36.279 14:25:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:36.279 14:25:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:36.279 14:25:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:36.279 14:25:17 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:36.279 14:25:17 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:36.279 14:25:17 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:36.279 14:25:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:36.279 14:25:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:36.279 14:25:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:36.279 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:36.279 14:25:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:36.279 14:25:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:36.279 14:25:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.279 14:25:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.279 14:25:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:36.279 14:25:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:36.279 14:25:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:36.279 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:36.279 14:25:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:36.279 14:25:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:36.279 14:25:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.279 14:25:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.279 14:25:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:36.279 14:25:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:36.279 14:25:17 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:36.279 14:25:17 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:36.279 14:25:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:36.279 14:25:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.279 14:25:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:36.279 14:25:17 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.279 14:25:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:36.279 Found net devices under 0000:08:00.0: cvl_0_0 00:20:36.279 14:25:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.279 14:25:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:36.279 14:25:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.279 14:25:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:36.279 14:25:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.279 14:25:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:36.279 Found net devices under 0000:08:00.1: cvl_0_1 00:20:36.279 14:25:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.279 14:25:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:36.279 14:25:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:36.279 14:25:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:36.279 14:25:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:36.279 14:25:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:36.279 14:25:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.279 14:25:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:36.279 14:25:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:36.279 14:25:17 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:36.279 14:25:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:36.279 14:25:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:36.279 14:25:17 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:36.279 14:25:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:36.279 14:25:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.279 14:25:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:36.279 14:25:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:36.280 14:25:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:36.280 14:25:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:36.280 14:25:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:36.280 14:25:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:36.280 14:25:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:36.280 14:25:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:36.280 14:25:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:36.280 14:25:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:36.280 14:25:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:36.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:36.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:20:36.280 00:20:36.280 --- 10.0.0.2 ping statistics --- 00:20:36.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.280 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:20:36.280 14:25:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:36.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:36.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:20:36.280 00:20:36.280 --- 10.0.0.1 ping statistics --- 00:20:36.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.280 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:20:36.280 14:25:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.280 14:25:17 -- nvmf/common.sh@411 -- # return 0 00:20:36.280 14:25:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:36.280 14:25:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.280 14:25:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:36.280 14:25:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:36.280 14:25:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.280 14:25:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:36.280 14:25:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:36.280 14:25:17 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:20:36.280 14:25:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:36.280 14:25:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:36.280 14:25:17 -- common/autotest_common.sh@10 -- # set +x 00:20:36.280 ************************************ 00:20:36.280 START TEST nvmf_target_disconnect_tc1 00:20:36.280 ************************************ 00:20:36.280 14:25:17 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:20:36.280 14:25:17 -- host/target_disconnect.sh@32 -- # set +e 00:20:36.280 14:25:17 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:36.538 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.538 [2024-04-26 14:25:17.908992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:36.538 [2024-04-26 14:25:17.909261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:36.538 [2024-04-26 14:25:17.909290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2076ec0 with addr=10.0.0.2, port=4420 00:20:36.538 [2024-04-26 14:25:17.909334] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:36.538 [2024-04-26 14:25:17.909362] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:36.538 [2024-04-26 14:25:17.909378] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:20:36.538 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:20:36.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:20:36.538 Initializing NVMe Controllers 00:20:36.538 14:25:17 -- host/target_disconnect.sh@33 -- # trap - ERR 00:20:36.538 14:25:17 -- host/target_disconnect.sh@33 -- # print_backtrace 00:20:36.538 14:25:17 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:20:36.538 14:25:17 -- common/autotest_common.sh@1139 -- # return 0 00:20:36.538 14:25:17 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:20:36.538 14:25:17 -- host/target_disconnect.sh@41 -- # set -e 00:20:36.538 00:20:36.538 real 0m0.088s 00:20:36.538 user 0m0.039s 00:20:36.538 sys 0m0.047s 00:20:36.538 14:25:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:36.538 14:25:17 -- common/autotest_common.sh@10 -- # set +x 00:20:36.538 ************************************ 00:20:36.538 
END TEST nvmf_target_disconnect_tc1 00:20:36.538 ************************************ 00:20:36.538 14:25:17 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:20:36.538 14:25:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:36.538 14:25:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:36.538 14:25:17 -- common/autotest_common.sh@10 -- # set +x 00:20:36.538 ************************************ 00:20:36.538 START TEST nvmf_target_disconnect_tc2 00:20:36.538 ************************************ 00:20:36.538 14:25:18 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:20:36.538 14:25:18 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:20:36.538 14:25:18 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:20:36.538 14:25:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:36.538 14:25:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:36.538 14:25:18 -- common/autotest_common.sh@10 -- # set +x 00:20:36.538 14:25:18 -- nvmf/common.sh@470 -- # nvmfpid=3213552 00:20:36.538 14:25:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:20:36.538 14:25:18 -- nvmf/common.sh@471 -- # waitforlisten 3213552 00:20:36.539 14:25:18 -- common/autotest_common.sh@817 -- # '[' -z 3213552 ']' 00:20:36.539 14:25:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.539 14:25:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:36.539 14:25:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.539 14:25:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:36.539 14:25:18 -- common/autotest_common.sh@10 -- # set +x 00:20:36.797 [2024-04-26 14:25:18.110912] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:20:36.797 [2024-04-26 14:25:18.110993] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.797 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.797 [2024-04-26 14:25:18.175783] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:36.797 [2024-04-26 14:25:18.292297] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.797 [2024-04-26 14:25:18.292355] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.797 [2024-04-26 14:25:18.292370] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.797 [2024-04-26 14:25:18.292384] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.797 [2024-04-26 14:25:18.292396] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
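For reference, the environment the tc2 target above starts into was wired up by nvmf_tcp_init during the earlier device scan: one E810 port (cvl_0_0) is moved into a private network namespace so initiator and target share the machine but not a network stack, and disconnect_init then launches nvmf_tgt inside that namespace. Condensed from the xtrace (waitforlisten is a harness helper that polls the app's /var/tmp/spdk.sock):

    # Namespace wiring from nvmf_tcp_init; cvl_0_0/cvl_0_1 are the two
    # E810 ports found during the PCI scan.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Target launch as logged above; -m 0xF0 pins the reactors to cores 4-7.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF0 &
    waitforlisten $!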
00:20:36.797 [2024-04-26 14:25:18.292507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:36.797 [2024-04-26 14:25:18.293052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:36.797 [2024-04-26 14:25:18.293145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:20:36.797 [2024-04-26 14:25:18.293307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:37.056 14:25:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:37.056 14:25:18 -- common/autotest_common.sh@850 -- # return 0 00:20:37.056 14:25:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:37.056 14:25:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:37.056 14:25:18 -- common/autotest_common.sh@10 -- # set +x 00:20:37.056 14:25:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.056 14:25:18 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:37.056 14:25:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.056 14:25:18 -- common/autotest_common.sh@10 -- # set +x 00:20:37.056 Malloc0 00:20:37.056 14:25:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.056 14:25:18 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:37.056 14:25:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.056 14:25:18 -- common/autotest_common.sh@10 -- # set +x 00:20:37.056 [2024-04-26 14:25:18.470192] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.056 14:25:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.056 14:25:18 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:37.056 14:25:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.056 14:25:18 -- common/autotest_common.sh@10 -- # set +x 00:20:37.056 14:25:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.056 14:25:18 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:37.056 14:25:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.056 14:25:18 -- common/autotest_common.sh@10 -- # set +x 00:20:37.056 14:25:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.056 14:25:18 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:37.056 14:25:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.056 14:25:18 -- common/autotest_common.sh@10 -- # set +x 00:20:37.056 [2024-04-26 14:25:18.498406] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.056 14:25:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.056 14:25:18 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:37.056 14:25:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.056 14:25:18 -- common/autotest_common.sh@10 -- # set +x 00:20:37.056 14:25:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.056 14:25:18 -- host/target_disconnect.sh@50 -- # reconnectpid=3213581 00:20:37.056 14:25:18 -- host/target_disconnect.sh@52 -- # sleep 2 00:20:37.056 14:25:18 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:37.056 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.960 14:25:20 -- host/target_disconnect.sh@53 -- # kill -9 3213552 00:20:38.960 14:25:20 -- host/target_disconnect.sh@55 -- # sleep 2 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 [2024-04-26 14:25:20.522074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with 
error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 [2024-04-26 14:25:20.522551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error 
(sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Write completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 Read completed with error (sct=0, sc=8) 00:20:38.960 starting I/O failed 00:20:38.960 [2024-04-26 14:25:20.522940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:38.960 [2024-04-26 14:25:20.523257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.960 [2024-04-26 14:25:20.523478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.960 [2024-04-26 14:25:20.523524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:38.960 qpair failed and we were unable to recover it. 00:20:38.960 [2024-04-26 14:25:20.523754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.960 [2024-04-26 14:25:20.524022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.960 [2024-04-26 14:25:20.524078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:38.960 qpair failed and we were unable to recover it. 00:20:38.960 [2024-04-26 14:25:20.524229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.524389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.524472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:38.961 qpair failed and we were unable to recover it. 
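What happened above: kill -9 on pid 3213552 removed the target mid-run, so every queued I/O on the reconnect app's qpairs drains with sct=0, sc=8 (generic NVMe status 0x08, "Command Aborted due to SQ Deletion", the code the initiator stamps on requests it aborts when a queue dies), and each qpair then reports one "CQ transport error -6 (No such device or address)". A hypothetical triage pass over a saved copy of this console (file name build.log assumed):

    # Count the aborted completions, then tally transport errors per qpair.
    grep -c 'completed with error (sct=0, sc=8)' build.log
    grep -o 'CQ transport error -6 (No such device or address) on qpair id [0-9]*' build.log \
        | sort | uniq -c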
00:20:38.961 [2024-04-26 14:25:20.524626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.524825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.524888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:38.961 qpair failed and we were unable to recover it. 00:20:38.961 [2024-04-26 14:25:20.525069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.525219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.525245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:38.961 qpair failed and we were unable to recover it. 00:20:38.961 [2024-04-26 14:25:20.525392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.525527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.525570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:38.961 qpair failed and we were unable to recover it. 00:20:38.961 [2024-04-26 14:25:20.525703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.525937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.525985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:38.961 qpair failed and we were unable to recover it. 00:20:38.961 [2024-04-26 14:25:20.526187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.526334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.526378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:38.961 qpair failed and we were unable to recover it. 00:20:38.961 [2024-04-26 14:25:20.526508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.526641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.526690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:38.961 qpair failed and we were unable to recover it. 00:20:38.961 [2024-04-26 14:25:20.526819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.527008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.527066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:38.961 qpair failed and we were unable to recover it. 
00:20:38.961 [2024-04-26 14:25:20.527176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.527278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.527303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:38.961 qpair failed and we were unable to recover it. 00:20:38.961 [2024-04-26 14:25:20.527412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.527539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.527564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:38.961 qpair failed and we were unable to recover it. 00:20:38.961 [2024-04-26 14:25:20.527669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.527828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.527853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:38.961 qpair failed and we were unable to recover it. 00:20:38.961 [2024-04-26 14:25:20.528037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.528164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.528191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:38.961 qpair failed and we were unable to recover it. 00:20:38.961 [2024-04-26 14:25:20.528312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.528415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:38.961 [2024-04-26 14:25:20.528441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:38.961 qpair failed and we were unable to recover it. 00:20:38.961 [2024-04-26 14:25:20.528557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.234 [2024-04-26 14:25:20.528662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.234 [2024-04-26 14:25:20.528688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.234 qpair failed and we were unable to recover it. 00:20:39.234 [2024-04-26 14:25:20.528881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.234 [2024-04-26 14:25:20.529017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.234 [2024-04-26 14:25:20.529044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.234 qpair failed and we were unable to recover it. 
00:20:39.234 [2024-04-26 14:25:20.529178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.234 [2024-04-26 14:25:20.529368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.234 [2024-04-26 14:25:20.529400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:39.234 qpair failed and we were unable to recover it.
00:20:39.234-00:20:39.236 [2024-04-26 14:25:20.529559 - 14:25:20.545423] (the same connect() failed / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for every reconnect attempt: ~46 further attempts against tqpair=0x7f6a68000b90, then 2 against tqpair=0x124d340, all to addr=10.0.0.2, port=4420 with errno = 111)
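errno 111 is ECONNREFUSED: the address 10.0.0.2 is reachable but nothing is listening on TCP port 4420, so the peer answers each SYN with a reset and connect() fails immediately instead of timing out, which is why the initiator can burn through dozens of reconnect attempts within a few milliseconds. A minimal standalone sketch (plain POSIX sockets, not SPDK code; address and port copied from the log) that reproduces the errno the posix.c lines report:

    /* Connecting to a reachable host with no listener on the port fails
     * with errno 111 (ECONNREFUSED), the condition posix_sock_create()
     * is logging above. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };

        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
            /* With no nvmf listener on 10.0.0.2:4420 this prints:
             * connect: errno=111 (Connection refused) */
            printf("connect: errno=%d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }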
00:20:39.236 [2024-04-26 14:25:20.545542 - 14:25:20.546331] (3 more reconnect attempts fail the same way: 1 against tqpair=0x124d340, then 2 against tqpair=0x7f6a68000b90)
00:20:39.236 (all 32 I/Os outstanding on the qpair then complete with error: 22x "Read completed with error (sct=0, sc=8)" and 10x "Write completed with error (sct=0, sc=8)", each followed by "starting I/O failed")
00:20:39.236 [2024-04-26 14:25:20.546702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:20:39.236 [2024-04-26 14:25:20.546784 - 14:25:20.548662] (reconnect attempts continue to fail with errno = 111: 1 against tqpair=0x124d340, then 5 against tqpair=0x7f6a58000b90, all to addr=10.0.0.2, port=4420)
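The completion burst above decodes as follows: sct=0 selects the Generic Command Status set, in which sc=0x8 is, per the NVMe base specification, "Command Aborted due to SQ Deletion", meaning the 32 queued I/Os were aborted because their queue pair was torn down. The -6 from spdk_nvme_qpair_process_completions() is -ENXIO ("No such device or address"), reported once the transport connection is gone. An illustrative sketch (not SPDK's own helpers) of how sct/sc unpack from the upper halfword of completion-queue-entry Dword 3, whose low bit is the phase tag:

    /* Decode the SCT/SC fields of an NVMe completion status halfword
     * (CQE DW3 bits 31:16): bit 0 = phase tag, bits 8:1 = Status Code,
     * bits 11:9 = Status Code Type. */
    #include <stdint.h>
    #include <stdio.h>

    static void decode_status(uint16_t status)
    {
        unsigned sc  = (status >> 1) & 0xff; /* Status Code      */
        unsigned sct = (status >> 9) & 0x7;  /* Status Code Type */
        printf("sct=%u, sc=%u\n", sct, sc);
    }

    int main(void)
    {
        /* sct=0 (generic), sc=0x8 (aborted: SQ deletion); matches the
         * "(sct=0, sc=8)" printed for every failed Read/Write above. */
        decode_status((0u << 9) | (8u << 1)); /* prints: sct=0, sc=8 */
        return 0;
    }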
00:20:39.236 [2024-04-26 14:25:20.548764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.236 [2024-04-26 14:25:20.548879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.236 [2024-04-26 14:25:20.548905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.236 qpair failed and we were unable to recover it. 00:20:39.236 [2024-04-26 14:25:20.549046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.236 [2024-04-26 14:25:20.549233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.236 [2024-04-26 14:25:20.549290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.236 qpair failed and we were unable to recover it. 00:20:39.236 [2024-04-26 14:25:20.549452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.236 [2024-04-26 14:25:20.549582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.236 [2024-04-26 14:25:20.549609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.236 qpair failed and we were unable to recover it. 00:20:39.236 [2024-04-26 14:25:20.549791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.236 [2024-04-26 14:25:20.549955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.236 [2024-04-26 14:25:20.549982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.236 qpair failed and we were unable to recover it. 00:20:39.236 [2024-04-26 14:25:20.550154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.236 [2024-04-26 14:25:20.550252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.236 [2024-04-26 14:25:20.550278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.236 qpair failed and we were unable to recover it. 00:20:39.236 [2024-04-26 14:25:20.550491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.236 [2024-04-26 14:25:20.550610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.236 [2024-04-26 14:25:20.550641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.236 qpair failed and we were unable to recover it. 00:20:39.236 [2024-04-26 14:25:20.550799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.236 [2024-04-26 14:25:20.550904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.236 [2024-04-26 14:25:20.550931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.236 qpair failed and we were unable to recover it. 
00:20:39.236 [2024-04-26 14:25:20.551100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.236 [2024-04-26 14:25:20.551235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.551260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.551457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.551580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.551668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.551770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.551876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.551902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.552086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.552258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.552283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.552431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.552615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.552677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.552899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.553090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.553141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.553301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.553411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.553438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 
00:20:39.237 [2024-04-26 14:25:20.553553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.553720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.553770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.553885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.554070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.554123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.554326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.554468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.554523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.554622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.554726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.554753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.554918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.555117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.555166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.555371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.555666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.555698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.555919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.556077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.556137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 
00:20:39.237 [2024-04-26 14:25:20.556375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.556589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.556619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.556754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.556953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.556983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.557119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.557295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.557351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.557517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.557740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.557791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.557968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.558098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.558133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.558340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.558455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.558485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.558663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.558881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.558934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 
00:20:39.237 [2024-04-26 14:25:20.559054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.559213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.559270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.559369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.559487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.559545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.559742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.559929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.559957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.560058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.560176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.560204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.560337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.560502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.560554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.560652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.560805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.560857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.560966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.561092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.561117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 
00:20:39.237 [2024-04-26 14:25:20.561298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.561437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.561498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.561609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.561825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.561851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.561961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.562051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.562075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.562173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.562289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.562347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.237 qpair failed and we were unable to recover it. 00:20:39.237 [2024-04-26 14:25:20.562471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.237 [2024-04-26 14:25:20.562586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.562611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.238 qpair failed and we were unable to recover it. 00:20:39.238 [2024-04-26 14:25:20.562782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.562960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.563011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.238 qpair failed and we were unable to recover it. 00:20:39.238 [2024-04-26 14:25:20.563183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.563348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.563373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.238 qpair failed and we were unable to recover it. 
00:20:39.238 [2024-04-26 14:25:20.563483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.563581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.563607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.238 qpair failed and we were unable to recover it. 00:20:39.238 [2024-04-26 14:25:20.563790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.563949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.564000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.238 qpair failed and we were unable to recover it. 00:20:39.238 [2024-04-26 14:25:20.564093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.564282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.564335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.238 qpair failed and we were unable to recover it. 00:20:39.238 [2024-04-26 14:25:20.564469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.564683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.564708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.238 qpair failed and we were unable to recover it. 00:20:39.238 [2024-04-26 14:25:20.564862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.565081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.565134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.238 qpair failed and we were unable to recover it. 00:20:39.238 [2024-04-26 14:25:20.565244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.565375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.565428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.238 qpair failed and we were unable to recover it. 00:20:39.238 [2024-04-26 14:25:20.565524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.565673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.565723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.238 qpair failed and we were unable to recover it. 
00:20:39.238 [2024-04-26 14:25:20.565821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.565989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.566051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.238 qpair failed and we were unable to recover it. 00:20:39.238 [2024-04-26 14:25:20.566204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.566342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.566369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.238 qpair failed and we were unable to recover it. 00:20:39.238 [2024-04-26 14:25:20.566545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.566720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.566773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.238 qpair failed and we were unable to recover it. 00:20:39.238 [2024-04-26 14:25:20.566943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.567128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.567184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.238 qpair failed and we were unable to recover it. 00:20:39.238 [2024-04-26 14:25:20.567321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.567516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.567566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.238 qpair failed and we were unable to recover it. 00:20:39.238 [2024-04-26 14:25:20.567686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.567878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.567925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.238 qpair failed and we were unable to recover it. 00:20:39.238 [2024-04-26 14:25:20.568088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.568279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.568303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.238 qpair failed and we were unable to recover it. 
00:20:39.238 [2024-04-26 14:25:20.568405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.568523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.568548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.238 qpair failed and we were unable to recover it. 00:20:39.238 [2024-04-26 14:25:20.568658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.568809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.568860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.238 qpair failed and we were unable to recover it. 00:20:39.238 [2024-04-26 14:25:20.568997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.569149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.569209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.238 qpair failed and we were unable to recover it. 00:20:39.238 [2024-04-26 14:25:20.569369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.569513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.569576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.238 qpair failed and we were unable to recover it. 00:20:39.238 [2024-04-26 14:25:20.569673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.569863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.569919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.238 qpair failed and we were unable to recover it. 00:20:39.238 [2024-04-26 14:25:20.570051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.570218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.238 [2024-04-26 14:25:20.570272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.238 qpair failed and we were unable to recover it. 00:20:39.238 [2024-04-26 14:25:20.570412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.239 [2024-04-26 14:25:20.570533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.239 [2024-04-26 14:25:20.570557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.239 qpair failed and we were unable to recover it. 
00:20:39.239 [2024-04-26 14:25:20.570652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.239 [2024-04-26 14:25:20.570786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.239 [2024-04-26 14:25:20.570836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.239 qpair failed and we were unable to recover it. 00:20:39.239 [2024-04-26 14:25:20.570966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.239 [2024-04-26 14:25:20.571108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.239 [2024-04-26 14:25:20.571159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.239 qpair failed and we were unable to recover it. 00:20:39.239 [2024-04-26 14:25:20.571296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.239 [2024-04-26 14:25:20.571413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.239 [2024-04-26 14:25:20.571437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.239 qpair failed and we were unable to recover it. 00:20:39.239 [2024-04-26 14:25:20.571546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.239 [2024-04-26 14:25:20.571684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.239 [2024-04-26 14:25:20.571733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.239 qpair failed and we were unable to recover it. 00:20:39.239 [2024-04-26 14:25:20.571868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.239 [2024-04-26 14:25:20.572027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.239 [2024-04-26 14:25:20.572080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.239 qpair failed and we were unable to recover it. 00:20:39.239 [2024-04-26 14:25:20.572189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.239 [2024-04-26 14:25:20.572322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.239 [2024-04-26 14:25:20.572348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.239 qpair failed and we were unable to recover it. 00:20:39.239 [2024-04-26 14:25:20.572452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.239 [2024-04-26 14:25:20.572543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.239 [2024-04-26 14:25:20.572574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.239 qpair failed and we were unable to recover it. 
00:20:39.241 [2024-04-26 14:25:20.594725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.241 [2024-04-26 14:25:20.594953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.241 [2024-04-26 14:25:20.594982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:39.241 qpair failed and we were unable to recover it.
00:20:39.241 [2024-04-26 14:25:20.597190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.241 [2024-04-26 14:25:20.597332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.241 [2024-04-26 14:25:20.597364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:39.242 qpair failed and we were unable to recover it.
00:20:39.242 [2024-04-26 14:25:20.597515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.242 [2024-04-26 14:25:20.597728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.242 [2024-04-26 14:25:20.597781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:20:39.242 qpair failed and we were unable to recover it.
00:20:39.242 [2024-04-26 14:25:20.600830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.242 [2024-04-26 14:25:20.601014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.242 [2024-04-26 14:25:20.601068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:39.242 qpair failed and we were unable to recover it.
00:20:39.242 [2024-04-26 14:25:20.601256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.242 [2024-04-26 14:25:20.601424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.242 [2024-04-26 14:25:20.601471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.242 qpair failed and we were unable to recover it.
00:20:39.244 [2024-04-26 14:25:20.621347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.621443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.621467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.244 qpair failed and we were unable to recover it. 00:20:39.244 [2024-04-26 14:25:20.621558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.621685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.621710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.244 qpair failed and we were unable to recover it. 00:20:39.244 [2024-04-26 14:25:20.621882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.622000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.622026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.244 qpair failed and we were unable to recover it. 00:20:39.244 [2024-04-26 14:25:20.622157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.622287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.622317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.244 qpair failed and we were unable to recover it. 00:20:39.244 [2024-04-26 14:25:20.622566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.622692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.622718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.244 qpair failed and we were unable to recover it. 00:20:39.244 [2024-04-26 14:25:20.622862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.622976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.623034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.244 qpair failed and we were unable to recover it. 00:20:39.244 [2024-04-26 14:25:20.623155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.623360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.623408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.244 qpair failed and we were unable to recover it. 
00:20:39.244 [2024-04-26 14:25:20.623503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.623598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.623623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.244 qpair failed and we were unable to recover it. 00:20:39.244 [2024-04-26 14:25:20.623726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.623856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.623883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.244 qpair failed and we were unable to recover it. 00:20:39.244 [2024-04-26 14:25:20.623982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.624162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.624209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.244 qpair failed and we were unable to recover it. 00:20:39.244 [2024-04-26 14:25:20.624421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.624548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.624605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.244 qpair failed and we were unable to recover it. 00:20:39.244 [2024-04-26 14:25:20.624802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.624968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.625019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.244 qpair failed and we were unable to recover it. 00:20:39.244 [2024-04-26 14:25:20.625200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.625313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.625365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.244 qpair failed and we were unable to recover it. 00:20:39.244 [2024-04-26 14:25:20.625557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.625796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.625852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.244 qpair failed and we were unable to recover it. 
00:20:39.244 [2024-04-26 14:25:20.626010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.626230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.626283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.244 qpair failed and we were unable to recover it. 00:20:39.244 [2024-04-26 14:25:20.626419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.626543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.244 [2024-04-26 14:25:20.626569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.244 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.626724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.626896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.626923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.627048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.627180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.627235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.627419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.627614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.627647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.627757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.627934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.627981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.628124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.628237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.628263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 
00:20:39.245 [2024-04-26 14:25:20.628388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.628494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.628553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.628646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.628769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.628793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.628887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.628983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.629010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.629187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.629283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.629310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.629473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.629581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.629605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.629707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.629864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.629916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.630062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.630251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.630276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 
00:20:39.245 [2024-04-26 14:25:20.630436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.630557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.630584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.630802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.630971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.631022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.631186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.631333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.631384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.631551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.631686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.631713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.631915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.632009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.632033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.632126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.632294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.632343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.632445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.632538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.632564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 
00:20:39.245 [2024-04-26 14:25:20.632754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.632922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.632972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.633137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.633258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.633284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.633394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.633532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.633558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.633663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.633822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.633847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.634007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.634112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.634137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.634229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.634321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.634347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.634446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.634539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.634564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 
00:20:39.245 [2024-04-26 14:25:20.634660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.634856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.634912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.635022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.635154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.635202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.635381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.635580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.635637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.245 qpair failed and we were unable to recover it. 00:20:39.245 [2024-04-26 14:25:20.635731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.635905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.245 [2024-04-26 14:25:20.635956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 00:20:39.246 [2024-04-26 14:25:20.636121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.636245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.636325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 00:20:39.246 [2024-04-26 14:25:20.636472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.636651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.636679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 00:20:39.246 [2024-04-26 14:25:20.636853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.637046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.637097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 
00:20:39.246 [2024-04-26 14:25:20.637245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.637411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.637459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 00:20:39.246 [2024-04-26 14:25:20.637596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.637761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.637811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 00:20:39.246 [2024-04-26 14:25:20.637905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.638022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.638070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 00:20:39.246 [2024-04-26 14:25:20.638245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.638459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.638484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 00:20:39.246 [2024-04-26 14:25:20.638582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.638714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.638765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 00:20:39.246 [2024-04-26 14:25:20.638921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.639046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.639094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 00:20:39.246 [2024-04-26 14:25:20.639293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.639460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.639511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 
00:20:39.246 [2024-04-26 14:25:20.639679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.639832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.639895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 00:20:39.246 [2024-04-26 14:25:20.639995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.640148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.640210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 00:20:39.246 [2024-04-26 14:25:20.640380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.640576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.640603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 00:20:39.246 [2024-04-26 14:25:20.640729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.640821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.640845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 00:20:39.246 [2024-04-26 14:25:20.641002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.641171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.641227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 00:20:39.246 [2024-04-26 14:25:20.641426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.641552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.641577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 00:20:39.246 [2024-04-26 14:25:20.641689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.641826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.641852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 
00:20:39.246 [2024-04-26 14:25:20.641986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.642138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.642193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 00:20:39.246 [2024-04-26 14:25:20.642308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.642429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.642453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 00:20:39.246 [2024-04-26 14:25:20.642560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.642670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.642696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 00:20:39.246 [2024-04-26 14:25:20.642866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.643037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.643087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 00:20:39.246 [2024-04-26 14:25:20.643206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.643374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.643426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 00:20:39.246 [2024-04-26 14:25:20.643518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.643680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.643706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 00:20:39.246 [2024-04-26 14:25:20.643911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.644075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.644132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 
00:20:39.246 [2024-04-26 14:25:20.644256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.644433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.644482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 00:20:39.246 [2024-04-26 14:25:20.644575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.644692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.644741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 00:20:39.246 [2024-04-26 14:25:20.644879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.645087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.645145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.246 qpair failed and we were unable to recover it. 00:20:39.246 [2024-04-26 14:25:20.645276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.246 [2024-04-26 14:25:20.645404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.645429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.247 qpair failed and we were unable to recover it. 00:20:39.247 [2024-04-26 14:25:20.645548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.645650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.645676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.247 qpair failed and we were unable to recover it. 00:20:39.247 [2024-04-26 14:25:20.645807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.645943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.645968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.247 qpair failed and we were unable to recover it. 00:20:39.247 [2024-04-26 14:25:20.646105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.646271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.646328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.247 qpair failed and we were unable to recover it. 
00:20:39.247 [2024-04-26 14:25:20.646469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.646586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.646612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.247 qpair failed and we were unable to recover it. 00:20:39.247 [2024-04-26 14:25:20.646759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.646850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.646876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.247 qpair failed and we were unable to recover it. 00:20:39.247 [2024-04-26 14:25:20.646976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.647165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.647213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.247 qpair failed and we were unable to recover it. 00:20:39.247 [2024-04-26 14:25:20.647377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.647499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.647524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.247 qpair failed and we were unable to recover it. 00:20:39.247 [2024-04-26 14:25:20.647684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.647823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.647879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.247 qpair failed and we were unable to recover it. 00:20:39.247 [2024-04-26 14:25:20.648010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.648230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.648255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.247 qpair failed and we were unable to recover it. 00:20:39.247 [2024-04-26 14:25:20.648375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.648522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.648570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.247 qpair failed and we were unable to recover it. 
00:20:39.247 [2024-04-26 14:25:20.648736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.648920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.648972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.247 qpair failed and we were unable to recover it. 00:20:39.247 [2024-04-26 14:25:20.649104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.649266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.649322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.247 qpair failed and we were unable to recover it. 00:20:39.247 [2024-04-26 14:25:20.649507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.649674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.649726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.247 qpair failed and we were unable to recover it. 00:20:39.247 [2024-04-26 14:25:20.649882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.650080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.650106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.247 qpair failed and we were unable to recover it. 00:20:39.247 [2024-04-26 14:25:20.650255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.650381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.650407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.247 qpair failed and we were unable to recover it. 00:20:39.247 [2024-04-26 14:25:20.650515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.650611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.650642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.247 qpair failed and we were unable to recover it. 00:20:39.247 [2024-04-26 14:25:20.650762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.650913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.650939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.247 qpair failed and we were unable to recover it. 
00:20:39.247 [2024-04-26 14:25:20.651078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.651183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.651208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.247 qpair failed and we were unable to recover it. 00:20:39.247 [2024-04-26 14:25:20.651309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.651442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.651499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.247 qpair failed and we were unable to recover it. 00:20:39.247 [2024-04-26 14:25:20.651651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.651838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.651888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.247 qpair failed and we were unable to recover it. 00:20:39.247 [2024-04-26 14:25:20.652012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.652106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.652133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.247 qpair failed and we were unable to recover it. 00:20:39.247 [2024-04-26 14:25:20.652279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.652400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.652452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.247 qpair failed and we were unable to recover it. 00:20:39.247 [2024-04-26 14:25:20.652576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.652723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.652779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.247 qpair failed and we were unable to recover it. 00:20:39.247 [2024-04-26 14:25:20.652947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.653107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.247 [2024-04-26 14:25:20.653157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.247 qpair failed and we were unable to recover it. 
00:20:39.248 [2024-04-26 14:25:20.653256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.248 [2024-04-26 14:25:20.653399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.248 [2024-04-26 14:25:20.653451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.248 qpair failed and we were unable to recover it.
00:20:39.253 [... the same four-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats verbatim with only the timestamps advancing, from 14:25:20.653 through 14:25:20.701; about 150 further repetitions elided ...]
00:20:39.253 [2024-04-26 14:25:20.701285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.253 [2024-04-26 14:25:20.701464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.253 [2024-04-26 14:25:20.701516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.253 qpair failed and we were unable to recover it. 00:20:39.253 [2024-04-26 14:25:20.701694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.253 [2024-04-26 14:25:20.701812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.253 [2024-04-26 14:25:20.701867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.253 qpair failed and we were unable to recover it. 00:20:39.253 [2024-04-26 14:25:20.701985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.253 [2024-04-26 14:25:20.702158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.253 [2024-04-26 14:25:20.702186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.253 qpair failed and we were unable to recover it. 00:20:39.253 [2024-04-26 14:25:20.702288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.253 [2024-04-26 14:25:20.702393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.253 [2024-04-26 14:25:20.702422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.253 qpair failed and we were unable to recover it. 00:20:39.253 [2024-04-26 14:25:20.702545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.253 [2024-04-26 14:25:20.702661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.253 [2024-04-26 14:25:20.702689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.253 qpair failed and we were unable to recover it. 00:20:39.253 [2024-04-26 14:25:20.702814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.253 [2024-04-26 14:25:20.702902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.253 [2024-04-26 14:25:20.702928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.253 qpair failed and we were unable to recover it. 00:20:39.253 [2024-04-26 14:25:20.703023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.253 [2024-04-26 14:25:20.703237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.253 [2024-04-26 14:25:20.703285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.253 qpair failed and we were unable to recover it. 
00:20:39.253 [2024-04-26 14:25:20.703384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.253 [2024-04-26 14:25:20.703506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.253 [2024-04-26 14:25:20.703568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.253 qpair failed and we were unable to recover it. 00:20:39.253 [2024-04-26 14:25:20.703773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.253 [2024-04-26 14:25:20.703921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.253 [2024-04-26 14:25:20.703972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.253 qpair failed and we were unable to recover it. 00:20:39.253 [2024-04-26 14:25:20.704067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.253 [2024-04-26 14:25:20.704259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.253 [2024-04-26 14:25:20.704309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.253 qpair failed and we were unable to recover it. 00:20:39.253 [2024-04-26 14:25:20.704408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.253 [2024-04-26 14:25:20.704519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.253 [2024-04-26 14:25:20.704574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.253 qpair failed and we were unable to recover it. 00:20:39.254 [2024-04-26 14:25:20.704700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.704892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.704944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 00:20:39.254 [2024-04-26 14:25:20.705074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.705272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.705323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 00:20:39.254 [2024-04-26 14:25:20.705437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.705588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.705646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 
00:20:39.254 [2024-04-26 14:25:20.705822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.705943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.705968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 00:20:39.254 [2024-04-26 14:25:20.706115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.706300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.706350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 00:20:39.254 [2024-04-26 14:25:20.706443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.706565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.706590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 00:20:39.254 [2024-04-26 14:25:20.706762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.706969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.706995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 00:20:39.254 [2024-04-26 14:25:20.707120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.707260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.707313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 00:20:39.254 [2024-04-26 14:25:20.707494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.707709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.707736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 00:20:39.254 [2024-04-26 14:25:20.707833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.707966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.708019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 
00:20:39.254 [2024-04-26 14:25:20.708139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.708330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.708357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 00:20:39.254 [2024-04-26 14:25:20.708529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.708646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.708686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 00:20:39.254 [2024-04-26 14:25:20.708852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.709038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.709064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 00:20:39.254 [2024-04-26 14:25:20.709279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.709413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.709461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 00:20:39.254 [2024-04-26 14:25:20.709571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.709661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.709687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 00:20:39.254 [2024-04-26 14:25:20.709862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.710010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.710062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 00:20:39.254 [2024-04-26 14:25:20.710170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.710262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.710287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 
00:20:39.254 [2024-04-26 14:25:20.710380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.710483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.710508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 00:20:39.254 [2024-04-26 14:25:20.710644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.710790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.710835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 00:20:39.254 [2024-04-26 14:25:20.710970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.711062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.711088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 00:20:39.254 [2024-04-26 14:25:20.711184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.711318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.711365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 00:20:39.254 [2024-04-26 14:25:20.711459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.711546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.711571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 00:20:39.254 [2024-04-26 14:25:20.711704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.711868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.711895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 00:20:39.254 [2024-04-26 14:25:20.712016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.712174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.712226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 
00:20:39.254 [2024-04-26 14:25:20.712405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.712555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.712582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 00:20:39.254 [2024-04-26 14:25:20.712687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.712811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.712856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 00:20:39.254 [2024-04-26 14:25:20.712966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.713118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.713167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 00:20:39.254 [2024-04-26 14:25:20.713328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.713561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.254 [2024-04-26 14:25:20.713607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.254 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.713741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.713889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.713915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.714004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.714136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.714189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.714298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.714449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.714498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 
00:20:39.255 [2024-04-26 14:25:20.714606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.714797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.714845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.715001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.715221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.715268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.715387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.715555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.715611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.715765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.715950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.715975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.716100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.716283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.716333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.716487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.716652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.716697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.716844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.716996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.717041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 
00:20:39.255 [2024-04-26 14:25:20.717163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.717317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.717374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.717516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.717641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.717691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.717835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.718004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.718056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.718147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.718251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.718276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.718399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.718551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.718581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.718682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.718782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.718807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.718906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.719051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.719077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 
00:20:39.255 [2024-04-26 14:25:20.719255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.719381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.719406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.719552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.719643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.719669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.719872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.720072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.720123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.720243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.720402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.720448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.720569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.720661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.720687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.720858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.721019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.721069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.721183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.721394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.721444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 
00:20:39.255 [2024-04-26 14:25:20.721644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.721776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.721839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.721983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.722232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.722285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.722384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.722514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.722567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.722675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.722813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.722839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.722962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.723073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.723099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.255 qpair failed and we were unable to recover it. 00:20:39.255 [2024-04-26 14:25:20.723285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.723405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.255 [2024-04-26 14:25:20.723432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 00:20:39.256 [2024-04-26 14:25:20.723556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.723731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.723786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 
00:20:39.256 [2024-04-26 14:25:20.723956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.724071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.724096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 00:20:39.256 [2024-04-26 14:25:20.724277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.724447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.724474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 00:20:39.256 [2024-04-26 14:25:20.724614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.724778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.724831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 00:20:39.256 [2024-04-26 14:25:20.724927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.725121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.725151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 00:20:39.256 [2024-04-26 14:25:20.725308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.725494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.725545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 00:20:39.256 [2024-04-26 14:25:20.725642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.725739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.725764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 00:20:39.256 [2024-04-26 14:25:20.725904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.726031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.726058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 
00:20:39.256 [2024-04-26 14:25:20.726198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.726408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.726434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 00:20:39.256 [2024-04-26 14:25:20.726524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.726656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.726703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 00:20:39.256 [2024-04-26 14:25:20.726865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.727015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.727061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 00:20:39.256 [2024-04-26 14:25:20.727157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.727287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.727339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 00:20:39.256 [2024-04-26 14:25:20.727430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.727536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.727562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 00:20:39.256 [2024-04-26 14:25:20.727665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.727790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.727829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 00:20:39.256 [2024-04-26 14:25:20.727922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.728056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.728112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 
00:20:39.256 [2024-04-26 14:25:20.728251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.728413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.728465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 00:20:39.256 [2024-04-26 14:25:20.728558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.728647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.728673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 00:20:39.256 [2024-04-26 14:25:20.728769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.728927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.728985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 00:20:39.256 [2024-04-26 14:25:20.729077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.729201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.729254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 00:20:39.256 [2024-04-26 14:25:20.729414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.729548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.729573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 00:20:39.256 [2024-04-26 14:25:20.729695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.729818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.729845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 00:20:39.256 [2024-04-26 14:25:20.729964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.730095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.730152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 
00:20:39.256 [2024-04-26 14:25:20.730269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.730458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.730511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 00:20:39.256 [2024-04-26 14:25:20.730666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.730797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.730822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 00:20:39.256 [2024-04-26 14:25:20.730939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.731121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.731147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 00:20:39.256 [2024-04-26 14:25:20.731294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.731447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.731473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 00:20:39.256 [2024-04-26 14:25:20.731567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.731677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.731703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 00:20:39.256 [2024-04-26 14:25:20.731851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.732016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.732067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 00:20:39.256 [2024-04-26 14:25:20.732162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.732260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.256 [2024-04-26 14:25:20.732285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.256 qpair failed and we were unable to recover it. 
00:20:39.256 [2024-04-26 14:25:20.732462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.256 [2024-04-26 14:25:20.732621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.257 [2024-04-26 14:25:20.732667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.257 qpair failed and we were unable to recover it.
[... the same three-message sequence (two posix_sock_create connect() failures with errno = 111, then an nvme_tcp_qpair_connect_sock error for tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it.") repeats without variation from 14:25:20.732462 through 14:25:20.782273 ...]
00:20:39.262 [2024-04-26 14:25:20.782368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.262 [2024-04-26 14:25:20.782471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.262 [2024-04-26 14:25:20.782498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.262 qpair failed and we were unable to recover it. 00:20:39.262 [2024-04-26 14:25:20.782647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.262 [2024-04-26 14:25:20.782742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.262 [2024-04-26 14:25:20.782768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.262 qpair failed and we were unable to recover it. 00:20:39.262 [2024-04-26 14:25:20.782871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.262 [2024-04-26 14:25:20.782973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.262 [2024-04-26 14:25:20.782998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.262 qpair failed and we were unable to recover it. 00:20:39.262 [2024-04-26 14:25:20.783146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.262 [2024-04-26 14:25:20.783244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.262 [2024-04-26 14:25:20.783272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.262 qpair failed and we were unable to recover it. 00:20:39.262 [2024-04-26 14:25:20.783394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.262 [2024-04-26 14:25:20.783491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.262 [2024-04-26 14:25:20.783517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.262 qpair failed and we were unable to recover it. 00:20:39.262 [2024-04-26 14:25:20.783612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.262 [2024-04-26 14:25:20.783716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.262 [2024-04-26 14:25:20.783742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.262 qpair failed and we were unable to recover it. 00:20:39.262 [2024-04-26 14:25:20.783869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.262 [2024-04-26 14:25:20.783981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.262 [2024-04-26 14:25:20.784011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.262 qpair failed and we were unable to recover it. 
00:20:39.262 [2024-04-26 14:25:20.784126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.262 [2024-04-26 14:25:20.784237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.262 [2024-04-26 14:25:20.784267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.262 qpair failed and we were unable to recover it. 00:20:39.262 [2024-04-26 14:25:20.784386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.262 [2024-04-26 14:25:20.784480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.262 [2024-04-26 14:25:20.784506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.262 qpair failed and we were unable to recover it. 00:20:39.262 [2024-04-26 14:25:20.784611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.262 [2024-04-26 14:25:20.784747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.262 [2024-04-26 14:25:20.784793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.262 qpair failed and we were unable to recover it. 00:20:39.262 [2024-04-26 14:25:20.784918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.262 [2024-04-26 14:25:20.785071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.262 [2024-04-26 14:25:20.785122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.262 qpair failed and we were unable to recover it. 00:20:39.262 [2024-04-26 14:25:20.785261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.262 [2024-04-26 14:25:20.785402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.262 [2024-04-26 14:25:20.785429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.262 qpair failed and we were unable to recover it. 00:20:39.263 [2024-04-26 14:25:20.785530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.263 [2024-04-26 14:25:20.785659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.263 [2024-04-26 14:25:20.785699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.263 qpair failed and we were unable to recover it. 00:20:39.263 [2024-04-26 14:25:20.785825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.263 [2024-04-26 14:25:20.785997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.263 [2024-04-26 14:25:20.786040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.263 qpair failed and we were unable to recover it. 
00:20:39.263 [2024-04-26 14:25:20.786167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.263 [2024-04-26 14:25:20.786317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.263 [2024-04-26 14:25:20.786356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.263 qpair failed and we were unable to recover it. 00:20:39.263 [2024-04-26 14:25:20.786499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.263 [2024-04-26 14:25:20.786606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.263 [2024-04-26 14:25:20.786666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.263 qpair failed and we were unable to recover it. 00:20:39.263 [2024-04-26 14:25:20.786785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.263 [2024-04-26 14:25:20.787610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.263 [2024-04-26 14:25:20.787652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.263 qpair failed and we were unable to recover it. 00:20:39.263 [2024-04-26 14:25:20.787764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.263 [2024-04-26 14:25:20.787913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.263 [2024-04-26 14:25:20.787938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.263 qpair failed and we were unable to recover it. 00:20:39.263 [2024-04-26 14:25:20.788038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.263 [2024-04-26 14:25:20.788135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.263 [2024-04-26 14:25:20.788161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.263 qpair failed and we were unable to recover it. 00:20:39.263 [2024-04-26 14:25:20.788296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.263 [2024-04-26 14:25:20.788391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.263 [2024-04-26 14:25:20.788416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.263 qpair failed and we were unable to recover it. 00:20:39.263 [2024-04-26 14:25:20.788518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.263 [2024-04-26 14:25:20.788612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.263 [2024-04-26 14:25:20.788647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.263 qpair failed and we were unable to recover it. 
00:20:39.263 [2024-04-26 14:25:20.788778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.263 [2024-04-26 14:25:20.788905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.263 [2024-04-26 14:25:20.788986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.263 qpair failed and we were unable to recover it. 00:20:39.263 [2024-04-26 14:25:20.789085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.263 [2024-04-26 14:25:20.789207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.263 [2024-04-26 14:25:20.789253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.263 qpair failed and we were unable to recover it. 00:20:39.263 [2024-04-26 14:25:20.789377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.263 [2024-04-26 14:25:20.789509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.526 [2024-04-26 14:25:20.789546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.526 qpair failed and we were unable to recover it. 00:20:39.526 [2024-04-26 14:25:20.789689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.526 [2024-04-26 14:25:20.789796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.526 [2024-04-26 14:25:20.789828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.526 qpair failed and we were unable to recover it. 00:20:39.526 [2024-04-26 14:25:20.789933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.526 [2024-04-26 14:25:20.790058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.526 [2024-04-26 14:25:20.790102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.526 qpair failed and we were unable to recover it. 00:20:39.526 [2024-04-26 14:25:20.790253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.526 [2024-04-26 14:25:20.790396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.526 [2024-04-26 14:25:20.790432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.526 qpair failed and we were unable to recover it. 00:20:39.526 [2024-04-26 14:25:20.790561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.526 [2024-04-26 14:25:20.790683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.526 [2024-04-26 14:25:20.790720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.526 qpair failed and we were unable to recover it. 
00:20:39.526 [2024-04-26 14:25:20.790848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.526 [2024-04-26 14:25:20.790966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.526 [2024-04-26 14:25:20.791001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 00:20:39.527 [2024-04-26 14:25:20.791140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.791258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.791296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 00:20:39.527 [2024-04-26 14:25:20.791446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.791597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.791661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 00:20:39.527 [2024-04-26 14:25:20.791799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.791917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.791948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 00:20:39.527 [2024-04-26 14:25:20.792071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.792222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.792253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 00:20:39.527 [2024-04-26 14:25:20.792387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.792524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.792554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 00:20:39.527 [2024-04-26 14:25:20.792678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.792870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.792897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 
00:20:39.527 [2024-04-26 14:25:20.793010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.793109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.793135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 00:20:39.527 [2024-04-26 14:25:20.793236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.793327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.793352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 00:20:39.527 [2024-04-26 14:25:20.793447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.793555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.793583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 00:20:39.527 [2024-04-26 14:25:20.793680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.793784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.793811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 00:20:39.527 [2024-04-26 14:25:20.793943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.794035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.794061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 00:20:39.527 [2024-04-26 14:25:20.794170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.794263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.794290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 00:20:39.527 [2024-04-26 14:25:20.794389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.794488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.794516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 
00:20:39.527 [2024-04-26 14:25:20.794623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.794729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.794755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 00:20:39.527 [2024-04-26 14:25:20.794865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.794979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.795014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 00:20:39.527 [2024-04-26 14:25:20.795133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.795226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.795257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 00:20:39.527 [2024-04-26 14:25:20.795357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.795459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.795487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 00:20:39.527 [2024-04-26 14:25:20.795587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.795691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.795719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 00:20:39.527 [2024-04-26 14:25:20.796412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.796522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.796553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 00:20:39.527 [2024-04-26 14:25:20.796656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.796785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.796811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 
00:20:39.527 [2024-04-26 14:25:20.796906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.797008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.797036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 00:20:39.527 [2024-04-26 14:25:20.797137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.797240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.797266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 00:20:39.527 [2024-04-26 14:25:20.797409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.797516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.797542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 00:20:39.527 [2024-04-26 14:25:20.797757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.797885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.797938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 00:20:39.527 [2024-04-26 14:25:20.798053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.798251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.798278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 00:20:39.527 [2024-04-26 14:25:20.798409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.798508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.798534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 00:20:39.527 [2024-04-26 14:25:20.798689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.798806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.527 [2024-04-26 14:25:20.798831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.527 qpair failed and we were unable to recover it. 
00:20:39.528 [2024-04-26 14:25:20.798930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.799023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.799048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 00:20:39.528 [2024-04-26 14:25:20.799149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.799256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.799282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 00:20:39.528 [2024-04-26 14:25:20.799379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.799469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.799494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 00:20:39.528 [2024-04-26 14:25:20.799610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.799712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.799737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 00:20:39.528 [2024-04-26 14:25:20.799835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.799934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.799958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 00:20:39.528 [2024-04-26 14:25:20.800056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.800152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.800178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 00:20:39.528 [2024-04-26 14:25:20.800282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.800384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.800409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 
00:20:39.528 [2024-04-26 14:25:20.800529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.800663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.800697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 00:20:39.528 [2024-04-26 14:25:20.800812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.800907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.800932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 00:20:39.528 [2024-04-26 14:25:20.801029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.801122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.801147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 00:20:39.528 [2024-04-26 14:25:20.801253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.801344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.801368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 00:20:39.528 [2024-04-26 14:25:20.801482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.801580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.801606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 00:20:39.528 [2024-04-26 14:25:20.801713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.801807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.801831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 00:20:39.528 [2024-04-26 14:25:20.801929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.802019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.802045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 
00:20:39.528 [2024-04-26 14:25:20.802172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.802266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.802291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 00:20:39.528 [2024-04-26 14:25:20.802400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.802493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.802517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 00:20:39.528 [2024-04-26 14:25:20.802622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.802721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.802746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 00:20:39.528 [2024-04-26 14:25:20.802850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.802952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.802976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 00:20:39.528 [2024-04-26 14:25:20.803109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.803202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.803225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 00:20:39.528 [2024-04-26 14:25:20.803326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.803429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.803453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 00:20:39.528 [2024-04-26 14:25:20.803550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.803646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.803674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 
00:20:39.528 [2024-04-26 14:25:20.803774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.803874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.803900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 00:20:39.528 [2024-04-26 14:25:20.804005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.804099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.804126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 00:20:39.528 [2024-04-26 14:25:20.804235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.804364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.804393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 00:20:39.528 [2024-04-26 14:25:20.804515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.804649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.804703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 00:20:39.528 [2024-04-26 14:25:20.804822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.804952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.804993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 00:20:39.528 [2024-04-26 14:25:20.805094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.805194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.528 [2024-04-26 14:25:20.805220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.528 qpair failed and we were unable to recover it. 00:20:39.528 [2024-04-26 14:25:20.805341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.805460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.805486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 
00:20:39.529 [2024-04-26 14:25:20.805613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.805736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.805761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 00:20:39.529 [2024-04-26 14:25:20.805867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.805974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.805999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 00:20:39.529 [2024-04-26 14:25:20.806104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.806215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.806255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 00:20:39.529 [2024-04-26 14:25:20.806374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.806511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.806535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 00:20:39.529 [2024-04-26 14:25:20.806637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.806770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.806795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 00:20:39.529 [2024-04-26 14:25:20.806893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.806994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.807020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 00:20:39.529 [2024-04-26 14:25:20.807124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.807226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.807251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 
00:20:39.529 [2024-04-26 14:25:20.807381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.807480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.807506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 00:20:39.529 [2024-04-26 14:25:20.807609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.807739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.807783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 00:20:39.529 [2024-04-26 14:25:20.807908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.808055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.808080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 00:20:39.529 [2024-04-26 14:25:20.808176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.808272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.808297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 00:20:39.529 [2024-04-26 14:25:20.808397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.808495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.808521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 00:20:39.529 [2024-04-26 14:25:20.808623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.808751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.808791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 00:20:39.529 [2024-04-26 14:25:20.808913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.809048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.809073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 
00:20:39.529 [2024-04-26 14:25:20.809226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.809350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.809374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 00:20:39.529 [2024-04-26 14:25:20.809474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.809577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.809603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 00:20:39.529 [2024-04-26 14:25:20.809731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.809855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.809885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 00:20:39.529 [2024-04-26 14:25:20.810000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.810094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.810119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 00:20:39.529 [2024-04-26 14:25:20.810257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.810433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.810473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 00:20:39.529 [2024-04-26 14:25:20.810572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.810666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.810708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 00:20:39.529 [2024-04-26 14:25:20.810929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.811071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.811132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 
00:20:39.529 [2024-04-26 14:25:20.811269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.811450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.811478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 00:20:39.529 [2024-04-26 14:25:20.811582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.811838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.811879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 00:20:39.529 [2024-04-26 14:25:20.811995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.812125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.812166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 00:20:39.529 [2024-04-26 14:25:20.812315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.812446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.812489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.529 qpair failed and we were unable to recover it. 00:20:39.529 [2024-04-26 14:25:20.812615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.812790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.529 [2024-04-26 14:25:20.812821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.530 qpair failed and we were unable to recover it. 00:20:39.530 [2024-04-26 14:25:20.812951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.530 [2024-04-26 14:25:20.813084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.530 [2024-04-26 14:25:20.813125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.530 qpair failed and we were unable to recover it. 00:20:39.530 [2024-04-26 14:25:20.813254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.530 [2024-04-26 14:25:20.813366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.530 [2024-04-26 14:25:20.813393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.530 qpair failed and we were unable to recover it. 
00:20:39.530 [2024-04-26 14:25:20.813494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.813589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.813615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.530 qpair failed and we were unable to recover it.
00:20:39.530 [2024-04-26 14:25:20.813734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.813831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.813860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.530 qpair failed and we were unable to recover it.
00:20:39.530 [2024-04-26 14:25:20.813957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.814098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.814123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.530 qpair failed and we were unable to recover it.
00:20:39.530 [2024-04-26 14:25:20.814219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.814313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.814337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.530 qpair failed and we were unable to recover it.
00:20:39.530 [2024-04-26 14:25:20.814434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.814531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.814557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.530 qpair failed and we were unable to recover it.
00:20:39.530 [2024-04-26 14:25:20.814685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.814774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.814799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.530 qpair failed and we were unable to recover it.
00:20:39.530 [2024-04-26 14:25:20.814895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.815015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.815040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.530 qpair failed and we were unable to recover it.
00:20:39.530 [2024-04-26 14:25:20.815187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.815320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.815369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:39.530 qpair failed and we were unable to recover it.
00:20:39.530 [2024-04-26 14:25:20.815490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.815687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.815715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:39.530 qpair failed and we were unable to recover it.
00:20:39.530 [2024-04-26 14:25:20.815862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.815972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.815996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:39.530 qpair failed and we were unable to recover it.
00:20:39.530 [2024-04-26 14:25:20.816142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.816257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.816284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:39.530 qpair failed and we were unable to recover it.
00:20:39.530 [2024-04-26 14:25:20.816400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.816508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.816539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:39.530 qpair failed and we were unable to recover it.
00:20:39.530 [2024-04-26 14:25:20.816664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.816822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.816860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:39.530 qpair failed and we were unable to recover it.
00:20:39.530 [2024-04-26 14:25:20.816963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.817060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.817084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:39.530 qpair failed and we were unable to recover it.
00:20:39.530 [2024-04-26 14:25:20.817178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.817273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.817307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:39.530 qpair failed and we were unable to recover it.
00:20:39.530 [2024-04-26 14:25:20.817524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.817639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.817667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:39.530 qpair failed and we were unable to recover it.
00:20:39.530 [2024-04-26 14:25:20.817780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.817899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.817939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:39.530 qpair failed and we were unable to recover it.
00:20:39.530 [2024-04-26 14:25:20.818090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.818326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.818352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:39.530 qpair failed and we were unable to recover it.
00:20:39.530 [2024-04-26 14:25:20.818579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.818676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.818702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:39.530 qpair failed and we were unable to recover it.
00:20:39.530 [2024-04-26 14:25:20.818802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.818926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.818967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:39.530 qpair failed and we were unable to recover it.
00:20:39.530 [2024-04-26 14:25:20.819118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.819239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.819268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:39.530 qpair failed and we were unable to recover it.
00:20:39.530 [2024-04-26 14:25:20.819393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.820093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.820123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:39.530 qpair failed and we were unable to recover it.
00:20:39.530 [2024-04-26 14:25:20.820277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.820416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.820456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:39.530 qpair failed and we were unable to recover it.
00:20:39.530 [2024-04-26 14:25:20.820561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.820660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.820686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:39.530 qpair failed and we were unable to recover it.
00:20:39.530 [2024-04-26 14:25:20.820807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.821046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.530 [2024-04-26 14:25:20.821071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:39.531 qpair failed and we were unable to recover it.
00:20:39.531 [2024-04-26 14:25:20.821188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.531 [2024-04-26 14:25:20.821356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.531 [2024-04-26 14:25:20.821400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:39.531 qpair failed and we were unable to recover it.
00:20:39.531 [2024-04-26 14:25:20.821513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.531 [2024-04-26 14:25:20.821644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.531 [2024-04-26 14:25:20.821687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:39.531 qpair failed and we were unable to recover it.
00:20:39.531 [2024-04-26 14:25:20.821797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.531 [2024-04-26 14:25:20.821947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.531 [2024-04-26 14:25:20.821988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:39.531 qpair failed and we were unable to recover it.
00:20:39.531 [2024-04-26 14:25:20.822108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124c1c0 is same with the state(5) to be set
00:20:39.531 [2024-04-26 14:25:20.822223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.531 [2024-04-26 14:25:20.822348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.531 [2024-04-26 14:25:20.822399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.531 qpair failed and we were unable to recover it.
00:20:39.531 [2024-04-26 14:25:20.822509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.531 [2024-04-26 14:25:20.822644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.531 [2024-04-26 14:25:20.822673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.531 qpair failed and we were unable to recover it.
00:20:39.531 [2024-04-26 14:25:20.822787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.531 [2024-04-26 14:25:20.822886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.531 [2024-04-26 14:25:20.822915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.531 qpair failed and we were unable to recover it.
00:20:39.531 [2024-04-26 14:25:20.823040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.531 [2024-04-26 14:25:20.823137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.531 [2024-04-26 14:25:20.823168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.531 qpair failed and we were unable to recover it.
00:20:39.531 [2024-04-26 14:25:20.823272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.531 [2024-04-26 14:25:20.823391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.531 [2024-04-26 14:25:20.823417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.531 qpair failed and we were unable to recover it.
00:20:39.531 [2024-04-26 14:25:20.823505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.531 [2024-04-26 14:25:20.823603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.531 [2024-04-26 14:25:20.823629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.531 qpair failed and we were unable to recover it.
00:20:39.531 [2024-04-26 14:25:20.823755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.793 [2024-04-26 14:25:21.302469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.793 [2024-04-26 14:25:21.302537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.793 qpair failed and we were unable to recover it.
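The one record above that is not a connect() failure comes from nvme_tcp_qpair_set_recv_state, which complains when the qpair's PDU receive state is set to the value it already holds. A C sketch of a guard with that shape (illustrative only; the enum values, struct layout, and the meaning of "state(5)" are assumptions, not taken from this log or the SPDK source):

#include <stdio.h>

/* Hypothetical receive-state enum; only the numeric value 5 is echoed
 * from the log line, its meaning here is assumed. */
enum recv_state {
    RECV_STATE_READY = 0,
    /* ... intermediate receive states elided ... */
    RECV_STATE_ERROR = 5
};

struct tqpair {
    enum recv_state recv_state;
};

static void set_recv_state(struct tqpair *q, enum recv_state s)
{
    if (q->recv_state == s) {
        /* Re-setting the current state is reported rather than silently ignored. */
        fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)q, (int)s);
        return;
    }
    q->recv_state = s;
}

int main(void)
{
    struct tqpair q = { RECV_STATE_ERROR };
    set_recv_state(&q, RECV_STATE_ERROR);   /* triggers the message */
    return 0;
}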
00:20:39.793 [2024-04-26 14:25:21.302806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.793 [2024-04-26 14:25:21.303043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.793 [2024-04-26 14:25:21.303082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.793 qpair failed and we were unable to recover it.
00:20:39.793 [2024-04-26 14:25:21.303276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.793 [2024-04-26 14:25:21.303440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.793 [2024-04-26 14:25:21.303484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.793 qpair failed and we were unable to recover it.
00:20:39.793 [2024-04-26 14:25:21.303679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.793 [2024-04-26 14:25:21.303869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.793 [2024-04-26 14:25:21.303923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.793 qpair failed and we were unable to recover it.
00:20:39.793 [2024-04-26 14:25:21.304127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.793 [2024-04-26 14:25:21.304323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.793 [2024-04-26 14:25:21.304349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.793 qpair failed and we were unable to recover it.
00:20:39.793 [2024-04-26 14:25:21.304478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.793 [2024-04-26 14:25:21.304664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.793 [2024-04-26 14:25:21.304705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.793 qpair failed and we were unable to recover it.
00:20:39.794 [2024-04-26 14:25:21.304872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.305045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.305089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.794 qpair failed and we were unable to recover it.
00:20:39.794 [2024-04-26 14:25:21.305252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.305437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.305468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.794 qpair failed and we were unable to recover it.
00:20:39.794 [2024-04-26 14:25:21.305642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.305833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.305859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.794 qpair failed and we were unable to recover it.
00:20:39.794 [2024-04-26 14:25:21.306006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.306187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.306232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.794 qpair failed and we were unable to recover it.
00:20:39.794 [2024-04-26 14:25:21.306383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.306529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.306555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.794 qpair failed and we were unable to recover it.
00:20:39.794 [2024-04-26 14:25:21.306713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.306908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.306958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.794 qpair failed and we were unable to recover it.
00:20:39.794 [2024-04-26 14:25:21.307126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.307383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.307438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.794 qpair failed and we were unable to recover it.
00:20:39.794 [2024-04-26 14:25:21.307598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.307770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.307796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.794 qpair failed and we were unable to recover it.
00:20:39.794 [2024-04-26 14:25:21.307896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.308113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.308139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.794 qpair failed and we were unable to recover it.
00:20:39.794 [2024-04-26 14:25:21.308254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.308407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.308477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.794 qpair failed and we were unable to recover it.
00:20:39.794 [2024-04-26 14:25:21.308687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.308883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.308940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.794 qpair failed and we were unable to recover it.
00:20:39.794 [2024-04-26 14:25:21.309113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.309300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.309352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.794 qpair failed and we were unable to recover it.
00:20:39.794 [2024-04-26 14:25:21.309520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.309716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.309742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.794 qpair failed and we were unable to recover it.
00:20:39.794 [2024-04-26 14:25:21.309914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.310088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.310115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.794 qpair failed and we were unable to recover it.
00:20:39.794 [2024-04-26 14:25:21.310270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.310468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.310521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.794 qpair failed and we were unable to recover it.
00:20:39.794 [2024-04-26 14:25:21.310759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.311025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.311074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.794 qpair failed and we were unable to recover it.
00:20:39.794 [2024-04-26 14:25:21.311208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.311480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.311528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.794 qpair failed and we were unable to recover it.
00:20:39.794 [2024-04-26 14:25:21.311686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.311884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.311910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.794 qpair failed and we were unable to recover it.
00:20:39.794 [2024-04-26 14:25:21.312089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.312263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.312305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.794 qpair failed and we were unable to recover it.
00:20:39.794 [2024-04-26 14:25:21.312415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.312599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.312654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.794 qpair failed and we were unable to recover it.
00:20:39.794 [2024-04-26 14:25:21.312849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.313080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.313130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.794 qpair failed and we were unable to recover it.
00:20:39.794 [2024-04-26 14:25:21.313227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.313426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.794 [2024-04-26 14:25:21.313477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.795 qpair failed and we were unable to recover it.
00:20:39.795 [2024-04-26 14:25:21.313609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.313849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.313898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.795 qpair failed and we were unable to recover it.
00:20:39.795 [2024-04-26 14:25:21.314003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.314260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.314316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.795 qpair failed and we were unable to recover it.
00:20:39.795 [2024-04-26 14:25:21.314501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.314757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.314822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.795 qpair failed and we were unable to recover it.
00:20:39.795 [2024-04-26 14:25:21.315090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.315210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.315235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.795 qpair failed and we were unable to recover it.
00:20:39.795 [2024-04-26 14:25:21.315423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.315651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.315682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.795 qpair failed and we were unable to recover it.
00:20:39.795 [2024-04-26 14:25:21.315845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.316058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.316107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.795 qpair failed and we were unable to recover it.
00:20:39.795 [2024-04-26 14:25:21.316205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.316391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.316416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.795 qpair failed and we were unable to recover it.
00:20:39.795 [2024-04-26 14:25:21.316623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.316832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.316858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.795 qpair failed and we were unable to recover it.
00:20:39.795 [2024-04-26 14:25:21.317059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.317295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.317357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.795 qpair failed and we were unable to recover it.
00:20:39.795 [2024-04-26 14:25:21.317598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.317733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.317759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.795 qpair failed and we were unable to recover it.
00:20:39.795 [2024-04-26 14:25:21.317867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.318047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.318096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.795 qpair failed and we were unable to recover it.
00:20:39.795 [2024-04-26 14:25:21.318285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.318460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.318485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.795 qpair failed and we were unable to recover it.
00:20:39.795 [2024-04-26 14:25:21.318667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.318918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.318966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.795 qpair failed and we were unable to recover it.
00:20:39.795 [2024-04-26 14:25:21.319123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.319304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.319329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.795 qpair failed and we were unable to recover it.
00:20:39.795 [2024-04-26 14:25:21.319562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.319774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.319802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.795 qpair failed and we were unable to recover it.
00:20:39.795 [2024-04-26 14:25:21.319984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.320103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.320158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.795 qpair failed and we were unable to recover it.
00:20:39.795 [2024-04-26 14:25:21.320253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.320507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.320556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.795 qpair failed and we were unable to recover it.
00:20:39.795 [2024-04-26 14:25:21.320723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.320882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.320940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.795 qpair failed and we were unable to recover it.
00:20:39.795 [2024-04-26 14:25:21.321110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.321389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.321439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.795 qpair failed and we were unable to recover it.
00:20:39.795 [2024-04-26 14:25:21.321659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.321822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.321848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.795 qpair failed and we were unable to recover it.
00:20:39.795 [2024-04-26 14:25:21.322053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.322272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.795 [2024-04-26 14:25:21.322298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.795 qpair failed and we were unable to recover it.
00:20:39.796 [2024-04-26 14:25:21.322490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.322736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.322785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.796 qpair failed and we were unable to recover it.
00:20:39.796 [2024-04-26 14:25:21.322982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.323190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.323215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.796 qpair failed and we were unable to recover it.
00:20:39.796 [2024-04-26 14:25:21.323384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.323590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.323644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.796 qpair failed and we were unable to recover it.
00:20:39.796 [2024-04-26 14:25:21.323760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.324040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.324102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.796 qpair failed and we were unable to recover it.
00:20:39.796 [2024-04-26 14:25:21.324280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.324508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.324556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.796 qpair failed and we were unable to recover it.
00:20:39.796 [2024-04-26 14:25:21.324710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.324899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.324960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.796 qpair failed and we were unable to recover it.
00:20:39.796 [2024-04-26 14:25:21.325113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.325298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.325348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.796 qpair failed and we were unable to recover it.
00:20:39.796 [2024-04-26 14:25:21.325513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.325715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.325742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.796 qpair failed and we were unable to recover it.
00:20:39.796 [2024-04-26 14:25:21.325939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.326101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.326153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.796 qpair failed and we were unable to recover it.
00:20:39.796 [2024-04-26 14:25:21.326254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.326349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.326376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.796 qpair failed and we were unable to recover it.
00:20:39.796 [2024-04-26 14:25:21.326484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.326690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.326718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.796 qpair failed and we were unable to recover it.
00:20:39.796 [2024-04-26 14:25:21.326880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.327128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.327176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.796 qpair failed and we were unable to recover it.
00:20:39.796 [2024-04-26 14:25:21.327354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.327624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.327667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.796 qpair failed and we were unable to recover it.
00:20:39.796 [2024-04-26 14:25:21.327788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.327885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.327910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.796 qpair failed and we were unable to recover it.
00:20:39.796 [2024-04-26 14:25:21.328019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.328110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.328135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.796 qpair failed and we were unable to recover it.
00:20:39.796 [2024-04-26 14:25:21.328272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.328361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.328386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.796 qpair failed and we were unable to recover it.
00:20:39.796 [2024-04-26 14:25:21.328485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.328625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.328657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.796 qpair failed and we were unable to recover it.
00:20:39.796 [2024-04-26 14:25:21.328789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.328902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.328927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.796 qpair failed and we were unable to recover it.
00:20:39.796 [2024-04-26 14:25:21.329023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.329127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.329152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.796 qpair failed and we were unable to recover it.
00:20:39.796 [2024-04-26 14:25:21.329282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.329410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.329437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.796 qpair failed and we were unable to recover it.
00:20:39.796 [2024-04-26 14:25:21.329530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.329655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.329688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.796 qpair failed and we were unable to recover it.
00:20:39.796 [2024-04-26 14:25:21.329792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.329935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.329969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.796 qpair failed and we were unable to recover it.
00:20:39.796 [2024-04-26 14:25:21.330103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.330318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.330344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.796 qpair failed and we were unable to recover it.
00:20:39.796 [2024-04-26 14:25:21.330487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.796 [2024-04-26 14:25:21.330644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.330690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.797 qpair failed and we were unable to recover it.
00:20:39.797 [2024-04-26 14:25:21.330851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.331074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.331124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.797 qpair failed and we were unable to recover it.
00:20:39.797 [2024-04-26 14:25:21.331257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.331384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.331411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.797 qpair failed and we were unable to recover it.
00:20:39.797 [2024-04-26 14:25:21.331514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.331688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.331738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.797 qpair failed and we were unable to recover it.
00:20:39.797 [2024-04-26 14:25:21.331850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.332004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.332060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.797 qpair failed and we were unable to recover it.
00:20:39.797 [2024-04-26 14:25:21.332202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.332368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.332420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.797 qpair failed and we were unable to recover it.
00:20:39.797 [2024-04-26 14:25:21.332650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.332787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.332813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.797 qpair failed and we were unable to recover it.
00:20:39.797 [2024-04-26 14:25:21.332944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.333160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.333208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.797 qpair failed and we were unable to recover it.
00:20:39.797 [2024-04-26 14:25:21.333393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.333524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.333556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.797 qpair failed and we were unable to recover it.
00:20:39.797 [2024-04-26 14:25:21.333701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.333865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.333895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.797 qpair failed and we were unable to recover it.
00:20:39.797 [2024-04-26 14:25:21.334033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.334171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.334199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.797 qpair failed and we were unable to recover it.
00:20:39.797 [2024-04-26 14:25:21.334333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.334516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.334562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.797 qpair failed and we were unable to recover it.
00:20:39.797 [2024-04-26 14:25:21.334673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.334800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.334843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.797 qpair failed and we were unable to recover it.
00:20:39.797 [2024-04-26 14:25:21.334972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.335107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.335141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.797 qpair failed and we were unable to recover it.
00:20:39.797 [2024-04-26 14:25:21.335250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.335347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.335375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.797 qpair failed and we were unable to recover it.
00:20:39.797 [2024-04-26 14:25:21.335479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.335595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.335659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.797 qpair failed and we were unable to recover it.
00:20:39.797 [2024-04-26 14:25:21.335799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.335940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.335966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.797 qpair failed and we were unable to recover it.
00:20:39.797 [2024-04-26 14:25:21.336114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.336353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.336380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.797 qpair failed and we were unable to recover it.
00:20:39.797 [2024-04-26 14:25:21.336503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.336651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.336693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.797 qpair failed and we were unable to recover it.
00:20:39.797 [2024-04-26 14:25:21.336828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.337010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.797 [2024-04-26 14:25:21.337051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:39.797 qpair failed and we were unable to recover it.
00:20:39.797 [2024-04-26 14:25:21.337156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.797 [2024-04-26 14:25:21.337326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.797 [2024-04-26 14:25:21.337355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.797 qpair failed and we were unable to recover it. 00:20:39.797 [2024-04-26 14:25:21.337618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.797 [2024-04-26 14:25:21.337798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.797 [2024-04-26 14:25:21.337856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.797 qpair failed and we were unable to recover it. 00:20:39.797 [2024-04-26 14:25:21.338010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.797 [2024-04-26 14:25:21.338179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.797 [2024-04-26 14:25:21.338234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.797 qpair failed and we were unable to recover it. 00:20:39.797 [2024-04-26 14:25:21.338368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.797 [2024-04-26 14:25:21.338530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.797 [2024-04-26 14:25:21.338583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.797 qpair failed and we were unable to recover it. 00:20:39.797 [2024-04-26 14:25:21.338755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.338994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.339041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.798 qpair failed and we were unable to recover it. 00:20:39.798 [2024-04-26 14:25:21.339169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.339326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.339385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.798 qpair failed and we were unable to recover it. 00:20:39.798 [2024-04-26 14:25:21.339548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.339790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.339817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.798 qpair failed and we were unable to recover it. 
00:20:39.798 [2024-04-26 14:25:21.339973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.340112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.340163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.798 qpair failed and we were unable to recover it. 00:20:39.798 [2024-04-26 14:25:21.340333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.340521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.340577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.798 qpair failed and we were unable to recover it. 00:20:39.798 [2024-04-26 14:25:21.340700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.340862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.340927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.798 qpair failed and we were unable to recover it. 00:20:39.798 [2024-04-26 14:25:21.341087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.341231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.341263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.798 qpair failed and we were unable to recover it. 00:20:39.798 [2024-04-26 14:25:21.341514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.341612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.341645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.798 qpair failed and we were unable to recover it. 00:20:39.798 [2024-04-26 14:25:21.341812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.341992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.342043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.798 qpair failed and we were unable to recover it. 00:20:39.798 [2024-04-26 14:25:21.342165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.342265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.342291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.798 qpair failed and we were unable to recover it. 
00:20:39.798 [2024-04-26 14:25:21.342502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.342659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.342686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.798 qpair failed and we were unable to recover it. 00:20:39.798 [2024-04-26 14:25:21.342787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.342920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.342945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.798 qpair failed and we were unable to recover it. 00:20:39.798 [2024-04-26 14:25:21.343078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.343199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.343260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.798 qpair failed and we were unable to recover it. 00:20:39.798 [2024-04-26 14:25:21.343405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.343554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.343579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.798 qpair failed and we were unable to recover it. 00:20:39.798 [2024-04-26 14:25:21.343716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.343913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.343972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.798 qpair failed and we were unable to recover it. 00:20:39.798 [2024-04-26 14:25:21.344125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.344286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.344338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.798 qpair failed and we were unable to recover it. 00:20:39.798 [2024-04-26 14:25:21.344446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.344563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.344589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.798 qpair failed and we were unable to recover it. 
00:20:39.798 [2024-04-26 14:25:21.344818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.345017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.345061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.798 qpair failed and we were unable to recover it. 00:20:39.798 [2024-04-26 14:25:21.345256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.345431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.345491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.798 qpair failed and we were unable to recover it. 00:20:39.798 [2024-04-26 14:25:21.345684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.345850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.345877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.798 qpair failed and we were unable to recover it. 00:20:39.798 [2024-04-26 14:25:21.346012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.346151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.346176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.798 qpair failed and we were unable to recover it. 00:20:39.798 [2024-04-26 14:25:21.346415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.346509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.346536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.798 qpair failed and we were unable to recover it. 00:20:39.798 [2024-04-26 14:25:21.346648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.346769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.346826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.798 qpair failed and we were unable to recover it. 00:20:39.798 [2024-04-26 14:25:21.347020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.798 [2024-04-26 14:25:21.347200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.347251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.799 qpair failed and we were unable to recover it. 
00:20:39.799 [2024-04-26 14:25:21.347381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.347532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.347559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.799 qpair failed and we were unable to recover it. 00:20:39.799 [2024-04-26 14:25:21.347682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.347809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.347834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.799 qpair failed and we were unable to recover it. 00:20:39.799 [2024-04-26 14:25:21.347964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.348107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.348163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.799 qpair failed and we were unable to recover it. 00:20:39.799 [2024-04-26 14:25:21.348351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.348503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.348563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.799 qpair failed and we were unable to recover it. 00:20:39.799 [2024-04-26 14:25:21.348670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.348811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.348853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.799 qpair failed and we were unable to recover it. 00:20:39.799 [2024-04-26 14:25:21.348970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.349100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.349124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.799 qpair failed and we were unable to recover it. 00:20:39.799 [2024-04-26 14:25:21.349237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.349425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.349481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.799 qpair failed and we were unable to recover it. 
00:20:39.799 [2024-04-26 14:25:21.349652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.349817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.349869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.799 qpair failed and we were unable to recover it. 00:20:39.799 [2024-04-26 14:25:21.350012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.350161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.350212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.799 qpair failed and we were unable to recover it. 00:20:39.799 [2024-04-26 14:25:21.350395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.350516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.350542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.799 qpair failed and we were unable to recover it. 00:20:39.799 [2024-04-26 14:25:21.350657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.350782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.350808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.799 qpair failed and we were unable to recover it. 00:20:39.799 [2024-04-26 14:25:21.350906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.351087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.351140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.799 qpair failed and we were unable to recover it. 00:20:39.799 [2024-04-26 14:25:21.351306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.351497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.351551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.799 qpair failed and we were unable to recover it. 00:20:39.799 [2024-04-26 14:25:21.351674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.351842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.351886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.799 qpair failed and we were unable to recover it. 
00:20:39.799 [2024-04-26 14:25:21.352074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.352179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.352206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.799 qpair failed and we were unable to recover it. 00:20:39.799 [2024-04-26 14:25:21.352354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.352443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.352469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.799 qpair failed and we were unable to recover it. 00:20:39.799 [2024-04-26 14:25:21.352571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.352674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.799 [2024-04-26 14:25:21.352700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.799 qpair failed and we were unable to recover it. 00:20:39.800 [2024-04-26 14:25:21.352831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.352948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.352974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.800 qpair failed and we were unable to recover it. 00:20:39.800 [2024-04-26 14:25:21.353106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.353234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.353265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.800 qpair failed and we were unable to recover it. 00:20:39.800 [2024-04-26 14:25:21.353360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.353455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.353480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.800 qpair failed and we were unable to recover it. 00:20:39.800 [2024-04-26 14:25:21.353576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.353672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.353699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.800 qpair failed and we were unable to recover it. 
00:20:39.800 [2024-04-26 14:25:21.353821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.353930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.353960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.800 qpair failed and we were unable to recover it. 00:20:39.800 [2024-04-26 14:25:21.354087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.354214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.354241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.800 qpair failed and we were unable to recover it. 00:20:39.800 [2024-04-26 14:25:21.354347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.354441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.354466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.800 qpair failed and we were unable to recover it. 00:20:39.800 [2024-04-26 14:25:21.354561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.354654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.354681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.800 qpair failed and we were unable to recover it. 00:20:39.800 [2024-04-26 14:25:21.354785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.354893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.354919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.800 qpair failed and we were unable to recover it. 00:20:39.800 [2024-04-26 14:25:21.355044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.355220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.355274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.800 qpair failed and we were unable to recover it. 00:20:39.800 [2024-04-26 14:25:21.355369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.355463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.355498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.800 qpair failed and we were unable to recover it. 
00:20:39.800 [2024-04-26 14:25:21.355657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.355847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.355914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.800 qpair failed and we were unable to recover it. 00:20:39.800 [2024-04-26 14:25:21.356081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.356177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.356203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.800 qpair failed and we were unable to recover it. 00:20:39.800 [2024-04-26 14:25:21.356317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.356517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.356567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.800 qpair failed and we were unable to recover it. 00:20:39.800 [2024-04-26 14:25:21.356664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.356860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.356914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.800 qpair failed and we were unable to recover it. 00:20:39.800 [2024-04-26 14:25:21.357025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.357169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.357222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.800 qpair failed and we were unable to recover it. 00:20:39.800 [2024-04-26 14:25:21.357334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.357560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.357617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.800 qpair failed and we were unable to recover it. 00:20:39.800 [2024-04-26 14:25:21.357751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.357948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.800 [2024-04-26 14:25:21.358003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:39.800 qpair failed and we were unable to recover it. 
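The errno = 111 in every posix_sock_create failure above is Linux's ECONNREFUSED: the target host answered the SYN with a RST because nothing was listening on 10.0.0.2:4420 at that moment. The standalone C sketch below (not SPDK code; the address, port, and message format merely mirror the log) reproduces the same errno with a plain connect(2). Note that an unreachable host would instead yield ETIMEDOUT or EHOSTUNREACH, so ECONNREFUSED specifically indicates a reachable host with no listener on the port.

/* Minimal standalone sketch (not SPDK code): a plain connect(2) to a
 * port with no listener fails with errno 111 (ECONNREFUSED) on Linux,
 * the same errno posix_sock_create reports in the log above. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                  /* NVMe/TCP well-known port, as in the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With a reachable host and no listener this prints errno = 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}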
00:20:39.800 [2024-04-26 14:25:21.358142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[the identical failure cycle continues uninterrupted from 14:25:21.358142 through 14:25:21.382030, every cycle again ending in "qpair failed and we were unable to recover it."; the log wall-clock prefix advances from 00:20:39.800 to 00:20:40.078 over the span]
00:20:40.078 [2024-04-26 14:25:21.382163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.078 [2024-04-26 14:25:21.382282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.078 [2024-04-26 14:25:21.382307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.078 qpair failed and we were unable to recover it. 00:20:40.078 [2024-04-26 14:25:21.382438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.078 [2024-04-26 14:25:21.382689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.078 [2024-04-26 14:25:21.382741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.078 qpair failed and we were unable to recover it. 00:20:40.078 [2024-04-26 14:25:21.382890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.078 [2024-04-26 14:25:21.383007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.078 [2024-04-26 14:25:21.383033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.078 qpair failed and we were unable to recover it. 00:20:40.078 [2024-04-26 14:25:21.383248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.078 [2024-04-26 14:25:21.383369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.078 [2024-04-26 14:25:21.383393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.078 qpair failed and we were unable to recover it. 00:20:40.078 [2024-04-26 14:25:21.383518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.078 [2024-04-26 14:25:21.383658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.078 [2024-04-26 14:25:21.383703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.078 qpair failed and we were unable to recover it. 00:20:40.078 [2024-04-26 14:25:21.383862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.078 [2024-04-26 14:25:21.384050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.078 [2024-04-26 14:25:21.384109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.078 qpair failed and we were unable to recover it. 00:20:40.078 [2024-04-26 14:25:21.384240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.078 [2024-04-26 14:25:21.384439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.078 [2024-04-26 14:25:21.384488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.078 qpair failed and we were unable to recover it. 
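For reference, errno 111 on Linux is ECONNREFUSED: the initiator's TCP SYN to 10.0.0.2:4420 (the standard NVMe/TCP port) is answered with RST because nothing is accepting connections on that address yet. A minimal standalone sketch that reproduces the same errno, assuming only a Linux host with no listener on that address; this is illustrative and is not SPDK's posix sock layer:

```c
/* Minimal illustration of the "connect() failed, errno = 111" lines above.
 * Not SPDK code: it issues one plain blocking connect() to the address and
 * port taken from the log and prints the resulting errno. With no NVMe/TCP
 * target listening on 10.0.0.2:4420, the peer answers the SYN with RST and
 * connect() fails with ECONNREFUSED (111 on Linux). */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target addr from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```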
00:20:40.078 [2024-04-26 14:25:21.384692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.078 [2024-04-26 14:25:21.384964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.078 [2024-04-26 14:25:21.385017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.078 qpair failed and we were unable to recover it. 00:20:40.078 [2024-04-26 14:25:21.385263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.078 [2024-04-26 14:25:21.385446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.078 [2024-04-26 14:25:21.385502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.078 qpair failed and we were unable to recover it. 00:20:40.078 [2024-04-26 14:25:21.385679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.078 [2024-04-26 14:25:21.385834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.078 [2024-04-26 14:25:21.385861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.078 qpair failed and we were unable to recover it. 00:20:40.078 [2024-04-26 14:25:21.385958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.078 [2024-04-26 14:25:21.386050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.078 [2024-04-26 14:25:21.386075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.078 qpair failed and we were unable to recover it. 00:20:40.078 [2024-04-26 14:25:21.386267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.078 [2024-04-26 14:25:21.386439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.078 [2024-04-26 14:25:21.386503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.078 qpair failed and we were unable to recover it. 00:20:40.078 [2024-04-26 14:25:21.386628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.386868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.386893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 00:20:40.079 [2024-04-26 14:25:21.387051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.387178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.387203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 
00:20:40.079 [2024-04-26 14:25:21.387297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.387430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.387483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 00:20:40.079 [2024-04-26 14:25:21.387591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.387778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.387841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 00:20:40.079 [2024-04-26 14:25:21.387983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.388171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.388222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 00:20:40.079 [2024-04-26 14:25:21.388349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.388496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.388549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 00:20:40.079 [2024-04-26 14:25:21.388716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.388811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.388836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 00:20:40.079 [2024-04-26 14:25:21.389005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.389216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.389241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 00:20:40.079 [2024-04-26 14:25:21.389361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.389518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.389570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 
00:20:40.079 [2024-04-26 14:25:21.389679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.389891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.389916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 00:20:40.079 [2024-04-26 14:25:21.390065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.390196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.390223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 00:20:40.079 [2024-04-26 14:25:21.390404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.390598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.390623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 00:20:40.079 [2024-04-26 14:25:21.390755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.390912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.390964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 00:20:40.079 [2024-04-26 14:25:21.391122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.391285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.391367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 00:20:40.079 [2024-04-26 14:25:21.391463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.391564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.391589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 00:20:40.079 [2024-04-26 14:25:21.391783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.391985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.392038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 
00:20:40.079 [2024-04-26 14:25:21.392161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.392329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.392380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 00:20:40.079 [2024-04-26 14:25:21.392533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.392706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.392763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 00:20:40.079 [2024-04-26 14:25:21.392907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.393046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.393071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 00:20:40.079 [2024-04-26 14:25:21.393244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.393415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.393465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 00:20:40.079 [2024-04-26 14:25:21.393566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.393743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.393769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 00:20:40.079 [2024-04-26 14:25:21.393917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.394044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.394069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 00:20:40.079 [2024-04-26 14:25:21.394222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.394401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.394463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 
00:20:40.079 [2024-04-26 14:25:21.394559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.394725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.394776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 00:20:40.079 [2024-04-26 14:25:21.394876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.394966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.394992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 00:20:40.079 [2024-04-26 14:25:21.395105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.395325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.395382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 00:20:40.079 [2024-04-26 14:25:21.395585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.395715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.395742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 00:20:40.079 [2024-04-26 14:25:21.395919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.396090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.396151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 00:20:40.079 [2024-04-26 14:25:21.396267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.396395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.079 [2024-04-26 14:25:21.396420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.079 qpair failed and we were unable to recover it. 00:20:40.080 [2024-04-26 14:25:21.396546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.396678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.396705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.080 qpair failed and we were unable to recover it. 
00:20:40.080 [2024-04-26 14:25:21.396873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.397026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.397053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.080 qpair failed and we were unable to recover it. 00:20:40.080 [2024-04-26 14:25:21.397169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.397391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.397440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.080 qpair failed and we were unable to recover it. 00:20:40.080 [2024-04-26 14:25:21.397533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.397659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.397702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.080 qpair failed and we were unable to recover it. 00:20:40.080 [2024-04-26 14:25:21.397830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.398019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.398068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.080 qpair failed and we were unable to recover it. 00:20:40.080 [2024-04-26 14:25:21.398187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.398319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.398382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.080 qpair failed and we were unable to recover it. 00:20:40.080 [2024-04-26 14:25:21.398575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.398675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.398702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.080 qpair failed and we were unable to recover it. 00:20:40.080 [2024-04-26 14:25:21.398806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.398915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.398973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.080 qpair failed and we were unable to recover it. 
00:20:40.080 [2024-04-26 14:25:21.399091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.399265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.399316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.080 qpair failed and we were unable to recover it. 00:20:40.080 [2024-04-26 14:25:21.399442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.399590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.399615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.080 qpair failed and we were unable to recover it. 00:20:40.080 [2024-04-26 14:25:21.399752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.399898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.399922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.080 qpair failed and we were unable to recover it. 00:20:40.080 [2024-04-26 14:25:21.400018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.400125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.400182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.080 qpair failed and we were unable to recover it. 00:20:40.080 [2024-04-26 14:25:21.400272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.400389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.400413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.080 qpair failed and we were unable to recover it. 00:20:40.080 [2024-04-26 14:25:21.400502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.400650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.400676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.080 qpair failed and we were unable to recover it. 00:20:40.080 [2024-04-26 14:25:21.400789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.400931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.400987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.080 qpair failed and we were unable to recover it. 
00:20:40.080 [2024-04-26 14:25:21.401153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.401314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.401365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.080 qpair failed and we were unable to recover it. 00:20:40.080 [2024-04-26 14:25:21.401494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.401688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.401730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.080 qpair failed and we were unable to recover it. 00:20:40.080 [2024-04-26 14:25:21.401826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.401989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.402037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.080 qpair failed and we were unable to recover it. 00:20:40.080 [2024-04-26 14:25:21.402147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.402362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.402387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.080 qpair failed and we were unable to recover it. 00:20:40.080 [2024-04-26 14:25:21.402520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.402670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.402718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.080 qpair failed and we were unable to recover it. 00:20:40.080 [2024-04-26 14:25:21.402893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.403039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.403087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.080 qpair failed and we were unable to recover it. 00:20:40.080 [2024-04-26 14:25:21.403239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.403329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.403355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.080 qpair failed and we were unable to recover it. 
00:20:40.080 [2024-04-26 14:25:21.403517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.403660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.403726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.080 qpair failed and we were unable to recover it. 00:20:40.080 [2024-04-26 14:25:21.403863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.080 [2024-04-26 14:25:21.403972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.403997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 00:20:40.081 [2024-04-26 14:25:21.404167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.404294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.404335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 00:20:40.081 [2024-04-26 14:25:21.404484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.404659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.404685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 00:20:40.081 [2024-04-26 14:25:21.404806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.405015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.405068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 00:20:40.081 [2024-04-26 14:25:21.405227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.405407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.405472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 00:20:40.081 [2024-04-26 14:25:21.405570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.405682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.405709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 
00:20:40.081 [2024-04-26 14:25:21.405866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.406067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.406091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 00:20:40.081 [2024-04-26 14:25:21.406209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.406356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.406419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 00:20:40.081 [2024-04-26 14:25:21.406584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.406763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.406816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 00:20:40.081 [2024-04-26 14:25:21.406930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.407089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.407145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 00:20:40.081 [2024-04-26 14:25:21.407328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.407482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.407536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 00:20:40.081 [2024-04-26 14:25:21.407627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.407776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.407842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 00:20:40.081 [2024-04-26 14:25:21.407995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.408179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.408241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 
00:20:40.081 [2024-04-26 14:25:21.408428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.408638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.408663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 00:20:40.081 [2024-04-26 14:25:21.408776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.408979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.409005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 00:20:40.081 [2024-04-26 14:25:21.409098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.409188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.409213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 00:20:40.081 [2024-04-26 14:25:21.409353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.409499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.409546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 00:20:40.081 [2024-04-26 14:25:21.409654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.409777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.409819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 00:20:40.081 [2024-04-26 14:25:21.410001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.410098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.410124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 00:20:40.081 [2024-04-26 14:25:21.410265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.410401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.410462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 
00:20:40.081 [2024-04-26 14:25:21.410563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.410656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.410682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 00:20:40.081 [2024-04-26 14:25:21.410772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.410914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.410966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 00:20:40.081 [2024-04-26 14:25:21.411087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.411306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.411355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 00:20:40.081 [2024-04-26 14:25:21.411446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.411539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.411566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 00:20:40.081 [2024-04-26 14:25:21.411659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.411809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.411841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 00:20:40.081 [2024-04-26 14:25:21.412041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.412180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.412234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 00:20:40.081 [2024-04-26 14:25:21.412374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.412535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.412589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 
00:20:40.081 [2024-04-26 14:25:21.412779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.412925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.412975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 00:20:40.081 [2024-04-26 14:25:21.413068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.413155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.413180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.081 qpair failed and we were unable to recover it. 00:20:40.081 [2024-04-26 14:25:21.413328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.413418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.081 [2024-04-26 14:25:21.413443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.082 qpair failed and we were unable to recover it. 00:20:40.082 [2024-04-26 14:25:21.413541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.082 [2024-04-26 14:25:21.413686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.082 [2024-04-26 14:25:21.413712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.082 qpair failed and we were unable to recover it. 00:20:40.082 [2024-04-26 14:25:21.413896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.082 [2024-04-26 14:25:21.414062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.082 [2024-04-26 14:25:21.414114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.082 qpair failed and we were unable to recover it. 00:20:40.082 [2024-04-26 14:25:21.414264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.082 [2024-04-26 14:25:21.414407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.082 [2024-04-26 14:25:21.414470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.082 qpair failed and we were unable to recover it. 00:20:40.082 [2024-04-26 14:25:21.414565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.082 [2024-04-26 14:25:21.414689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.082 [2024-04-26 14:25:21.414738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.082 qpair failed and we were unable to recover it. 
00:20:40.082 [2024-04-26 14:25:21.414926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.082 [2024-04-26 14:25:21.415151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.082 [2024-04-26 14:25:21.415204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.082 qpair failed and we were unable to recover it. 00:20:40.082 [2024-04-26 14:25:21.415342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.082 [2024-04-26 14:25:21.415501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.082 [2024-04-26 14:25:21.415553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.082 qpair failed and we were unable to recover it. 00:20:40.082 [2024-04-26 14:25:21.415642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.082 [2024-04-26 14:25:21.415850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.082 [2024-04-26 14:25:21.415902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.082 qpair failed and we were unable to recover it. 00:20:40.082 [2024-04-26 14:25:21.415995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.082 [2024-04-26 14:25:21.416143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.082 [2024-04-26 14:25:21.416206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.082 qpair failed and we were unable to recover it. 00:20:40.082 [2024-04-26 14:25:21.416397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.082 [2024-04-26 14:25:21.416567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.082 [2024-04-26 14:25:21.416636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.082 qpair failed and we were unable to recover it. 00:20:40.082 [2024-04-26 14:25:21.416834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.082 [2024-04-26 14:25:21.417018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.082 [2024-04-26 14:25:21.417072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.082 qpair failed and we were unable to recover it. 00:20:40.082 [2024-04-26 14:25:21.417288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.082 [2024-04-26 14:25:21.417408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.082 [2024-04-26 14:25:21.417435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.082 qpair failed and we were unable to recover it. 
00:20:40.086 [2024-04-26 14:25:21.455874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.086 [2024-04-26 14:25:21.456010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.086 [2024-04-26 14:25:21.456035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.086 qpair failed and we were unable to recover it.
00:20:40.086 [2024-04-26 14:25:21.456232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.086 [2024-04-26 14:25:21.456407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.086 [2024-04-26 14:25:21.456459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.086 qpair failed and we were unable to recover it.
00:20:40.086 [2024-04-26 14:25:21.456576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.086 [2024-04-26 14:25:21.456785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.086 [2024-04-26 14:25:21.456811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.086 qpair failed and we were unable to recover it.
00:20:40.086 [2024-04-26 14:25:21.456905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.086 [2024-04-26 14:25:21.457048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.086 [2024-04-26 14:25:21.457100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.086 qpair failed and we were unable to recover it.
00:20:40.086 [2024-04-26 14:25:21.457265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.086 [2024-04-26 14:25:21.457436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.086 [2024-04-26 14:25:21.457492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:20:40.086 qpair failed and we were unable to recover it.
00:20:40.086 [2024-04-26 14:25:21.457688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.086 [2024-04-26 14:25:21.457849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.086 [2024-04-26 14:25:21.457907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:20:40.086 qpair failed and we were unable to recover it.
00:20:40.086 [2024-04-26 14:25:21.458041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.086 [2024-04-26 14:25:21.458205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.086 [2024-04-26 14:25:21.458258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:20:40.086 qpair failed and we were unable to recover it.
00:20:40.086 [2024-04-26 14:25:21.460791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.086 [2024-04-26 14:25:21.461021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.086 [2024-04-26 14:25:21.461077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.086 qpair failed and we were unable to recover it.
00:20:40.086 [2024-04-26 14:25:21.461193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.086 [2024-04-26 14:25:21.461308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.086 [2024-04-26 14:25:21.461338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.086 qpair failed and we were unable to recover it.
00:20:40.086 [2024-04-26 14:25:21.461464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.086 [2024-04-26 14:25:21.461605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.086 [2024-04-26 14:25:21.461673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.086 qpair failed and we were unable to recover it.
00:20:40.086 [2024-04-26 14:25:21.461853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.086 [2024-04-26 14:25:21.461997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.086 [2024-04-26 14:25:21.462056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.086 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.462161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.462309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.462375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.462546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.462714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.462762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.462911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.463069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.463149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.463303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.463454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.463510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.463659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.463833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.463892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.464030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.464222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.464273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.464433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.464657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.464701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.464836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.465004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.465055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.465165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.465264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.465289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.465377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.465554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.465581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.465685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.465847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.465902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.466047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.466192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.466240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.466401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.466536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.466584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.466720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.466928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.466991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.467084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.467227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.467252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.467379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.467649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.467706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.467911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.468078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.468126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.468237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.468402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.468454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.468606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.468775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.468855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.469053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.469219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.469274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.469367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.469460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.469485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.469666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.469833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.469858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.469951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.470063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.470118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.470315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.470458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.470482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.470596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.470724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.470750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.470898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.471044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.087 [2024-04-26 14:25:21.471094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.087 qpair failed and we were unable to recover it.
00:20:40.087 [2024-04-26 14:25:21.471186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.471298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.471357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.471537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.471651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.471677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.471791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.471914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.471940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.472081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.472225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.472293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.472452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.472616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.472662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.472753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.472867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.472919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.473071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.473192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.473241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.473364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.473499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.473556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.473647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.473767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.473816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.473947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.474105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.474155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.474316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.474485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.474539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.474642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.474751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.474809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.474917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.475066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.475119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.475274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.475403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.475428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.475520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.475653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.475701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.475829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.475991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.476016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.476160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.476315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.476367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.476522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.476664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.476691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.476814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.476987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.477038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.477162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.477305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.477358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.477556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.477692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.477750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.477852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.477999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.478024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.478114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.478253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.478300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.478434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.478575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.478600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.478716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.478865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.478921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.479012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.479098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.479123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.479234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.479416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.479480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.479609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.479734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.479759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.479874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.479998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.480024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.088 qpair failed and we were unable to recover it.
00:20:40.088 [2024-04-26 14:25:21.480215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.088 [2024-04-26 14:25:21.480392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.480454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.089 qpair failed and we were unable to recover it.
00:20:40.089 [2024-04-26 14:25:21.480580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.480677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.480703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.089 qpair failed and we were unable to recover it.
00:20:40.089 [2024-04-26 14:25:21.480839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.481003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.481057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.089 qpair failed and we were unable to recover it.
00:20:40.089 [2024-04-26 14:25:21.481236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.481386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.481435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.089 qpair failed and we were unable to recover it.
00:20:40.089 [2024-04-26 14:25:21.481580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.481748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.481803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.089 qpair failed and we were unable to recover it.
00:20:40.089 [2024-04-26 14:25:21.481922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.482099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.482126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.089 qpair failed and we were unable to recover it.
00:20:40.089 [2024-04-26 14:25:21.482257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.482383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.482409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.089 qpair failed and we were unable to recover it.
00:20:40.089 [2024-04-26 14:25:21.482502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.482591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.482617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.089 qpair failed and we were unable to recover it.
00:20:40.089 [2024-04-26 14:25:21.482810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.482948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.483002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.089 qpair failed and we were unable to recover it.
00:20:40.089 [2024-04-26 14:25:21.483100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.483228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.483279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.089 qpair failed and we were unable to recover it.
00:20:40.089 [2024-04-26 14:25:21.483434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.483559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.483585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.089 qpair failed and we were unable to recover it.
00:20:40.089 [2024-04-26 14:25:21.483762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.483913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.483994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.089 qpair failed and we were unable to recover it.
00:20:40.089 [2024-04-26 14:25:21.484179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.484318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.484345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.089 qpair failed and we were unable to recover it.
00:20:40.089 [2024-04-26 14:25:21.484529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.484647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.484674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.089 qpair failed and we were unable to recover it.
00:20:40.089 [2024-04-26 14:25:21.484823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.484926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.484951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.089 qpair failed and we were unable to recover it.
00:20:40.089 [2024-04-26 14:25:21.485074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.485245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.485269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.089 qpair failed and we were unable to recover it.
00:20:40.089 [2024-04-26 14:25:21.485404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.485522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.485547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.089 qpair failed and we were unable to recover it.
00:20:40.089 [2024-04-26 14:25:21.485681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.485831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.485856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.089 qpair failed and we were unable to recover it.
00:20:40.089 [2024-04-26 14:25:21.485980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.486184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.486233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.089 qpair failed and we were unable to recover it.
00:20:40.089 [2024-04-26 14:25:21.486360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.486581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.486646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.089 qpair failed and we were unable to recover it.
00:20:40.089 [2024-04-26 14:25:21.486826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.486934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.486959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.089 qpair failed and we were unable to recover it.
00:20:40.089 [2024-04-26 14:25:21.487077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.487285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.089 [2024-04-26 14:25:21.487338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.487458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.487659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.487709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.487826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.487997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.488062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.488185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.488300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.488325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.488418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.488561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.488587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.488744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.488879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.488904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.489024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.489211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.489263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.489395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.489545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.489602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.489801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.489971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.490020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.490180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.490407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.490459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.490588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.490755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.490806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.490945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.491077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.491135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.491284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.491429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.491494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.491618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.491751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.491777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.491897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.492040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.492090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.492266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.492416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.492468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.492556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.492656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.492683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.492816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.492999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.493049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.493178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.493330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.493378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.493492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.493694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.493719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.493861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.493991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.494050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.494175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.494384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.494411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.494508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.494688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.494736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.494930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.495083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.495128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.495325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.495476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.495526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.495653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.495821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.495882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.495995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.496142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.496193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.496361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.496563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.496613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.496769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.496900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.496926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.090 qpair failed and we were unable to recover it.
00:20:40.090 [2024-04-26 14:25:21.497059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.090 [2024-04-26 14:25:21.497211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.497260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.091 qpair failed and we were unable to recover it.
00:20:40.091 [2024-04-26 14:25:21.497388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.497491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.497515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.091 qpair failed and we were unable to recover it.
00:20:40.091 [2024-04-26 14:25:21.497672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.497826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.497891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.091 qpair failed and we were unable to recover it.
00:20:40.091 [2024-04-26 14:25:21.497996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.498171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.498223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.091 qpair failed and we were unable to recover it.
00:20:40.091 [2024-04-26 14:25:21.498310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.498498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.498545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.091 qpair failed and we were unable to recover it.
00:20:40.091 [2024-04-26 14:25:21.498644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.498784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.498850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.091 qpair failed and we were unable to recover it.
00:20:40.091 [2024-04-26 14:25:21.498979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.499150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.499201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.091 qpair failed and we were unable to recover it.
00:20:40.091 [2024-04-26 14:25:21.499367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.499564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.499614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.091 qpair failed and we were unable to recover it.
00:20:40.091 [2024-04-26 14:25:21.499741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.499916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.499978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.091 qpair failed and we were unable to recover it.
00:20:40.091 [2024-04-26 14:25:21.500100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.500226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.500286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.091 qpair failed and we were unable to recover it.
00:20:40.091 [2024-04-26 14:25:21.500440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.500571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.500595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.091 qpair failed and we were unable to recover it.
00:20:40.091 [2024-04-26 14:25:21.500737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.500945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.500970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.091 qpair failed and we were unable to recover it.
00:20:40.091 [2024-04-26 14:25:21.501115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.501256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.501308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.091 qpair failed and we were unable to recover it.
00:20:40.091 [2024-04-26 14:25:21.501444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.501590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.501655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.091 qpair failed and we were unable to recover it.
00:20:40.091 [2024-04-26 14:25:21.501851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.501965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.501992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.091 qpair failed and we were unable to recover it.
00:20:40.091 [2024-04-26 14:25:21.502105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.502258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.502310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.091 qpair failed and we were unable to recover it.
00:20:40.091 [2024-04-26 14:25:21.502443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.502589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.502653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.091 qpair failed and we were unable to recover it.
00:20:40.091 [2024-04-26 14:25:21.502823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.502961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.503022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.091 qpair failed and we were unable to recover it.
00:20:40.091 [2024-04-26 14:25:21.503198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.503327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.503411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.091 qpair failed and we were unable to recover it.
00:20:40.091 [2024-04-26 14:25:21.503592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.503777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.503831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.091 qpair failed and we were unable to recover it.
00:20:40.091 [2024-04-26 14:25:21.503963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.504146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.504198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.091 qpair failed and we were unable to recover it.
00:20:40.091 [2024-04-26 14:25:21.504401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.504499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.504525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.091 qpair failed and we were unable to recover it.
00:20:40.091 [2024-04-26 14:25:21.504657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.504832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.504886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.091 qpair failed and we were unable to recover it.
00:20:40.091 [2024-04-26 14:25:21.505040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.505270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.505325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:20:40.091 qpair failed and we were unable to recover it.
00:20:40.091 [2024-04-26 14:25:21.505454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.505618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.091 [2024-04-26 14:25:21.505678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.091 qpair failed and we were unable to recover it.
00:20:40.096 [... sequence repeats for tqpair=0x7f6a68000b90 through 14:25:21.548115 ...]
00:20:40.096 [2024-04-26 14:25:21.548213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.096 [2024-04-26 14:25:21.548326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.096 [2024-04-26 14:25:21.548376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.096 qpair failed and we were unable to recover it.
00:20:40.097 [2024-04-26 14:25:21.548512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.097 [2024-04-26 14:25:21.548657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.097 [2024-04-26 14:25:21.548692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.097 qpair failed and we were unable to recover it.
[... the same failure sequence repeats against tqpair=0x7f6a60000b90 from 14:25:21.548 through 14:25:21.552, then the attempts return to tqpair=0x7f6a68000b90 ...]
00:20:40.097 [2024-04-26 14:25:21.553329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.553494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.553557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.097 qpair failed and we were unable to recover it. 00:20:40.097 [2024-04-26 14:25:21.553708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.553845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.553904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.097 qpair failed and we were unable to recover it. 00:20:40.097 [2024-04-26 14:25:21.554026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.554186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.554237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.097 qpair failed and we were unable to recover it. 00:20:40.097 [2024-04-26 14:25:21.554337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.554454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.554508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.097 qpair failed and we were unable to recover it. 00:20:40.097 [2024-04-26 14:25:21.554662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.554804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.554885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.097 qpair failed and we were unable to recover it. 00:20:40.097 [2024-04-26 14:25:21.555046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.555234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.555292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.097 qpair failed and we were unable to recover it. 00:20:40.097 [2024-04-26 14:25:21.555396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.555582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.555645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.097 qpair failed and we were unable to recover it. 
00:20:40.097 [2024-04-26 14:25:21.555739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.555837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.555864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.097 qpair failed and we were unable to recover it. 00:20:40.097 [2024-04-26 14:25:21.555964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.556052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.556077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.097 qpair failed and we were unable to recover it. 00:20:40.097 [2024-04-26 14:25:21.556169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.556339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.556364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.097 qpair failed and we were unable to recover it. 00:20:40.097 [2024-04-26 14:25:21.556459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.556552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.556579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.097 qpair failed and we were unable to recover it. 00:20:40.097 [2024-04-26 14:25:21.556706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.556930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.556955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.097 qpair failed and we were unable to recover it. 00:20:40.097 [2024-04-26 14:25:21.557111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.557341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.557387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.097 qpair failed and we were unable to recover it. 00:20:40.097 [2024-04-26 14:25:21.557509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.557673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.557714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.097 qpair failed and we were unable to recover it. 
00:20:40.097 [2024-04-26 14:25:21.557861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.557950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.097 [2024-04-26 14:25:21.557976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.558089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.558277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.558303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.558425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.558580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.558605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.558765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.558935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.558973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.559092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.559212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.559237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.559348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.559506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.559556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.559713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.559857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.559912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 
00:20:40.098 [2024-04-26 14:25:21.560047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.560204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.560283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.560415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.560563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.560589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.560732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.560892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.560946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.561092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.561244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.561297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.561406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.561532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.561557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.561661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.561759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.561785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.561966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.562130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.562155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 
00:20:40.098 [2024-04-26 14:25:21.562252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.562386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.562437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.562533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.562623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.562656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.562816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.562992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.563018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.563138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.563297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.563353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.563454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.563597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.563622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.563827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.564026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.564051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.564173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.564313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.564366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 
00:20:40.098 [2024-04-26 14:25:21.564500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.564701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.564727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.564853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.565018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.565069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.565190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.565336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.565388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.565493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.565664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.565715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.565889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.566060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.566112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.566206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.566298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.566324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.566461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.566606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.566641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 
00:20:40.098 [2024-04-26 14:25:21.566744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.566888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.566955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.567052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.567145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.567170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.098 qpair failed and we were unable to recover it. 00:20:40.098 [2024-04-26 14:25:21.567301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.098 [2024-04-26 14:25:21.567533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.567584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 00:20:40.099 [2024-04-26 14:25:21.567767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.567911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.567966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 00:20:40.099 [2024-04-26 14:25:21.568114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.568281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.568365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 00:20:40.099 [2024-04-26 14:25:21.568486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.568704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.568732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 00:20:40.099 [2024-04-26 14:25:21.568890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.569046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.569102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 
00:20:40.099 [2024-04-26 14:25:21.569196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.569321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.569402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 00:20:40.099 [2024-04-26 14:25:21.569501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.569726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.569754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 00:20:40.099 [2024-04-26 14:25:21.569924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.570113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.570160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 00:20:40.099 [2024-04-26 14:25:21.570282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.570400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.570426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 00:20:40.099 [2024-04-26 14:25:21.570526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.570682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.570710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 00:20:40.099 [2024-04-26 14:25:21.570808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.570934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.570985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 00:20:40.099 [2024-04-26 14:25:21.571164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.571303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.571350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 
00:20:40.099 [2024-04-26 14:25:21.571479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.571624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.571661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 00:20:40.099 [2024-04-26 14:25:21.571757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.571900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.571950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 00:20:40.099 [2024-04-26 14:25:21.572152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.572351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.572375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 00:20:40.099 [2024-04-26 14:25:21.572514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.572675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.572732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 00:20:40.099 [2024-04-26 14:25:21.572860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.573013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.573054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 00:20:40.099 [2024-04-26 14:25:21.573176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.573429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.573454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 00:20:40.099 [2024-04-26 14:25:21.573585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.573723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.573786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 
00:20:40.099 [2024-04-26 14:25:21.573934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.574089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.574116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 00:20:40.099 [2024-04-26 14:25:21.574270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.574402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.574428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 00:20:40.099 [2024-04-26 14:25:21.574525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.574734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.574761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 00:20:40.099 [2024-04-26 14:25:21.574900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.575057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.575115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 00:20:40.099 [2024-04-26 14:25:21.575355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.575450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.575476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 00:20:40.099 [2024-04-26 14:25:21.575583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.575702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.575753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 00:20:40.099 [2024-04-26 14:25:21.575869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.576068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.576120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 
00:20:40.099 [2024-04-26 14:25:21.576313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.576479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.099 [2024-04-26 14:25:21.576529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.099 qpair failed and we were unable to recover it. 00:20:40.099 [2024-04-26 14:25:21.576624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.576781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.576834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.100 qpair failed and we were unable to recover it. 00:20:40.100 [2024-04-26 14:25:21.576933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.577049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.577109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.100 qpair failed and we were unable to recover it. 00:20:40.100 [2024-04-26 14:25:21.577352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.577447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.577472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.100 qpair failed and we were unable to recover it. 00:20:40.100 [2024-04-26 14:25:21.577566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.577653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.577679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.100 qpair failed and we were unable to recover it. 00:20:40.100 [2024-04-26 14:25:21.577770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.577865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.577891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.100 qpair failed and we were unable to recover it. 00:20:40.100 [2024-04-26 14:25:21.578012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.578188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.578245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.100 qpair failed and we were unable to recover it. 
00:20:40.100 [2024-04-26 14:25:21.578382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.578501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.578527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.100 qpair failed and we were unable to recover it. 00:20:40.100 [2024-04-26 14:25:21.578699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.578827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.578870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.100 qpair failed and we were unable to recover it. 00:20:40.100 [2024-04-26 14:25:21.578980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.579135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.579204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.100 qpair failed and we were unable to recover it. 00:20:40.100 [2024-04-26 14:25:21.579371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.579551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.579603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.100 qpair failed and we were unable to recover it. 00:20:40.100 [2024-04-26 14:25:21.579728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.579884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.579966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.100 qpair failed and we were unable to recover it. 00:20:40.100 [2024-04-26 14:25:21.580117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.580348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.580400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.100 qpair failed and we were unable to recover it. 00:20:40.100 [2024-04-26 14:25:21.580593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.580781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.580841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.100 qpair failed and we were unable to recover it. 
00:20:40.100 [2024-04-26 14:25:21.581022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.581264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.581290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.100 qpair failed and we were unable to recover it. 00:20:40.100 [2024-04-26 14:25:21.581389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.581566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.581622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.100 qpair failed and we were unable to recover it. 00:20:40.100 [2024-04-26 14:25:21.581785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.581973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.582000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.100 qpair failed and we were unable to recover it. 00:20:40.100 [2024-04-26 14:25:21.582159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.582392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.582443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.100 qpair failed and we were unable to recover it. 00:20:40.100 [2024-04-26 14:25:21.582542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.582650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.582677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.100 qpair failed and we were unable to recover it. 00:20:40.100 [2024-04-26 14:25:21.582811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.583012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.583061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.100 qpair failed and we were unable to recover it. 00:20:40.100 [2024-04-26 14:25:21.583154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.583284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.100 [2024-04-26 14:25:21.583338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.100 qpair failed and we were unable to recover it. 
00:20:40.101 [2024-04-26 14:25:21.593617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.101 [2024-04-26 14:25:21.593722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.101 [2024-04-26 14:25:21.593748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.101 qpair failed and we were unable to recover it.
00:20:40.101 [2024-04-26 14:25:21.593870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.101 [2024-04-26 14:25:21.594016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.101 [2024-04-26 14:25:21.594066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.101 qpair failed and we were unable to recover it.
00:20:40.101 [2024-04-26 14:25:21.594187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.101 [2024-04-26 14:25:21.594344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.101 [2024-04-26 14:25:21.594392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.101 qpair failed and we were unable to recover it.
00:20:40.101 [2024-04-26 14:25:21.594533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.101 [2024-04-26 14:25:21.594625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.101 [2024-04-26 14:25:21.594658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.101 qpair failed and we were unable to recover it.
00:20:40.101 [2024-04-26 14:25:21.594769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.101 [2024-04-26 14:25:21.594947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.101 [2024-04-26 14:25:21.595004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.101 qpair failed and we were unable to recover it.
00:20:40.101 [2024-04-26 14:25:21.595136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.101 [2024-04-26 14:25:21.595236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.101 [2024-04-26 14:25:21.595271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.101 qpair failed and we were unable to recover it.
00:20:40.101 [2024-04-26 14:25:21.595381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.101 [2024-04-26 14:25:21.595509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.101 [2024-04-26 14:25:21.595563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.101 qpair failed and we were unable to recover it.
00:20:40.102 [2024-04-26 14:25:21.600352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.102 [2024-04-26 14:25:21.600539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.102 [2024-04-26 14:25:21.600597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.102 qpair failed and we were unable to recover it.
00:20:40.102 [2024-04-26 14:25:21.600806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.102 [2024-04-26 14:25:21.600960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.102 [2024-04-26 14:25:21.601043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.102 qpair failed and we were unable to recover it.
00:20:40.102 [2024-04-26 14:25:21.601134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.102 [2024-04-26 14:25:21.601228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.102 [2024-04-26 14:25:21.601255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.102 qpair failed and we were unable to recover it.
00:20:40.102 [2024-04-26 14:25:21.601399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.102 [2024-04-26 14:25:21.601556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.102 [2024-04-26 14:25:21.601608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.102 qpair failed and we were unable to recover it.
00:20:40.102 [2024-04-26 14:25:21.601754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.102 [2024-04-26 14:25:21.601891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.102 [2024-04-26 14:25:21.601949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.102 qpair failed and we were unable to recover it.
00:20:40.102 [2024-04-26 14:25:21.602084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.102 [2024-04-26 14:25:21.602207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.102 [2024-04-26 14:25:21.602233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.102 qpair failed and we were unable to recover it.
00:20:40.102 [2024-04-26 14:25:21.602401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.102 [2024-04-26 14:25:21.602647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.102 [2024-04-26 14:25:21.602691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.102 qpair failed and we were unable to recover it.
00:20:40.103 [2024-04-26 14:25:21.613843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.613976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.614028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.103 qpair failed and we were unable to recover it.
00:20:40.103 [2024-04-26 14:25:21.614155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.614352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.614402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.103 qpair failed and we were unable to recover it.
00:20:40.103 [2024-04-26 14:25:21.614516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.614692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.614750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.103 qpair failed and we were unable to recover it.
00:20:40.103 [2024-04-26 14:25:21.614907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.615104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.615156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.103 qpair failed and we were unable to recover it.
00:20:40.103 [2024-04-26 14:25:21.615305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.615467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.615529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.103 qpair failed and we were unable to recover it.
00:20:40.103 [2024-04-26 14:25:21.615655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.615781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.615831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.103 qpair failed and we were unable to recover it.
00:20:40.103 [2024-04-26 14:25:21.615948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.616113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.616159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.103 qpair failed and we were unable to recover it.
00:20:40.103 [2024-04-26 14:25:21.616309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.616454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.616502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.103 qpair failed and we were unable to recover it.
00:20:40.103 [2024-04-26 14:25:21.616716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.616886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.616953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.103 qpair failed and we were unable to recover it.
00:20:40.103 [2024-04-26 14:25:21.617067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.617263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.617315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.103 qpair failed and we were unable to recover it.
00:20:40.103 [2024-04-26 14:25:21.617491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.617682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.617710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.103 qpair failed and we were unable to recover it.
00:20:40.103 [2024-04-26 14:25:21.617906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.618118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.618167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.103 qpair failed and we were unable to recover it.
00:20:40.103 [2024-04-26 14:25:21.618302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.618496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.618546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.103 qpair failed and we were unable to recover it.
00:20:40.103 [2024-04-26 14:25:21.618664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.618917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.103 [2024-04-26 14:25:21.618969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.103 qpair failed and we were unable to recover it.
00:20:40.383 [2024-04-26 14:25:21.631201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.631318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.631376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.631484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.631606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.631670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.631824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.631979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.632030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.632186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.632306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.632334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.632442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.632545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.632572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.632680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.632789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.632816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.632915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.633012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.633046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 
00:20:40.383 [2024-04-26 14:25:21.633166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.633382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.633435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.633543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.633685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.633732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.633888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.634036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.634083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.634224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.634410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.634457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.634562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.634662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.634691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.634852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.634955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.634984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.635092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.635186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.635212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 
00:20:40.383 [2024-04-26 14:25:21.635326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.635470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.635497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.635602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.635726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.635771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.635900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.636047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.636100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.636230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.636429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.636458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.636599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.636756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.636812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.636934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.637094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.637138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.637285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.637524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.637550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 
00:20:40.383 [2024-04-26 14:25:21.637719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.637850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.637881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.638008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.638148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.638191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.638292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.638390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.638417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.638511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.638693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.638722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.638925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.639113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.639140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.639239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.639344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.639389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.639520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.639643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.639670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 
00:20:40.383 [2024-04-26 14:25:21.639826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.639964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.640016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.640116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.640221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.640248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.640342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.640457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.640499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.640607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.640815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.640869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.640986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.641126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.641178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.641286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.641449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.641498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.641620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.641798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.641850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 
00:20:40.383 [2024-04-26 14:25:21.641991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.642118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.642145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.642252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.642370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.642415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.642553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.642672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.642703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.642806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.642910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.642937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.643059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.643204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.643236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.643377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.643519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.643568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.643663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.643792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.643841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 
00:20:40.383 [2024-04-26 14:25:21.643954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.644090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.644147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.644265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.644379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.644420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.383 qpair failed and we were unable to recover it. 00:20:40.383 [2024-04-26 14:25:21.644517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.644638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.383 [2024-04-26 14:25:21.644719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.644869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.645100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.645142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.645283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.645397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.645423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.645538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.645679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.645706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.645824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.645992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.646035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 
00:20:40.384 [2024-04-26 14:25:21.646164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.646326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.646376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.646471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.646600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.646625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.646736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.646853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.646905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.647069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.647231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.647284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.647379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.647493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.647546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.647789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.647898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.647930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.648092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.648319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.648345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 
00:20:40.384 [2024-04-26 14:25:21.648439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.648531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.648556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.648659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.648788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.648815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.648962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.649069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.649101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.649283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.649412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.649450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.649542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.649649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.649681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.649797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.649976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.650001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.650111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.650246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.650288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 
00:20:40.384 [2024-04-26 14:25:21.650427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.650551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.650578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.650756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.650899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.650931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.651072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.651168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.651193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.651310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.651455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.651505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.651625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.651791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.651842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.652025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.652198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.652251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.652360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.652507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.652533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 
00:20:40.384 [2024-04-26 14:25:21.652628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.652805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.652831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.652930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.653030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.653058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.653171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.653293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.653320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.653450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.653547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.653575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.653689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.653891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.653916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.654013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.654134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.654175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.654265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.654368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.654394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 
00:20:40.384 [2024-04-26 14:25:21.654493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.654588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.654614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.654758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.654850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.654876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.654983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.655105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.655131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.655258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.655403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.655429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.655519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.655643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.655685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.655804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.655948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.656006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.656157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.656305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.656360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 
00:20:40.384 [2024-04-26 14:25:21.656491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.656590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.656617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.656808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.656947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.656989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.657114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.657253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.657305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.657427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.657551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.657582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.657690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.657820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.657873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.658025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.658159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.658185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.658305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.658475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.658523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 
00:20:40.384 [2024-04-26 14:25:21.658619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.658745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.658792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.658885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.659049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.659096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.659204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.659327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.659358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.659471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.659565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.659592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.659735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.659941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.659992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.660120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.660312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.660338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.660522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.660656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.660705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 
00:20:40.384 [2024-04-26 14:25:21.660857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.660970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.384 [2024-04-26 14:25:21.660995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.384 qpair failed and we were unable to recover it. 00:20:40.384 [2024-04-26 14:25:21.661115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.385 [2024-04-26 14:25:21.661247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.385 [2024-04-26 14:25:21.661295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.385 qpair failed and we were unable to recover it. 00:20:40.385 [2024-04-26 14:25:21.661444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.385 [2024-04-26 14:25:21.661562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.385 [2024-04-26 14:25:21.661587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.385 qpair failed and we were unable to recover it. 00:20:40.385 [2024-04-26 14:25:21.661707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.385 [2024-04-26 14:25:21.661845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.385 [2024-04-26 14:25:21.661879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.385 qpair failed and we were unable to recover it. 00:20:40.385 [2024-04-26 14:25:21.662015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.385 [2024-04-26 14:25:21.662152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.385 [2024-04-26 14:25:21.662198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.385 qpair failed and we were unable to recover it. 00:20:40.385 [2024-04-26 14:25:21.662368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.385 [2024-04-26 14:25:21.662546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.385 [2024-04-26 14:25:21.662588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.385 qpair failed and we were unable to recover it. 00:20:40.385 [2024-04-26 14:25:21.662769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.385 [2024-04-26 14:25:21.662902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.385 [2024-04-26 14:25:21.662934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.385 qpair failed and we were unable to recover it. 
00:20:40.385 [2024-04-26 14:25:21.663043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.385 [2024-04-26 14:25:21.663169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.385 [2024-04-26 14:25:21.663222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.385 qpair failed and we were unable to recover it.
00:20:40.385 [... the same connect() failed (errno = 111) / sock connection error / qpair failed sequence repeats for tqpair=0x7f6a60000b90 at timestamps 14:25:21.663321 through 14:25:21.690111 ...]
00:20:40.386 [2024-04-26 14:25:21.690228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.386 [2024-04-26 14:25:21.690332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.386 [2024-04-26 14:25:21.690359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.386 qpair failed and we were unable to recover it.
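
Note on the error itself: on Linux, errno = 111 is ECONNREFUSED, so every TCP SYN the initiator sends to 10.0.0.2:4420 is being answered with a reset, which normally means nothing is accepting on that address/port while these qpair connects are attempted. The following is a minimal, hypothetical shell probe (not part of this CI run; it assumes bash and iproute2 are available on the hosts) that would surface the same condition:

# Hypothetical probe, not part of this CI run: bash's /dev/tcp connect
# attempt fails with "Connection refused" for the same reason the qpair
# connects in this log fail with errno = 111 (ECONNREFUSED).
timeout 2 bash -c 'echo > /dev/tcp/10.0.0.2/4420' \
  && echo 'listener reachable on 4420' \
  || echo 'refused or timed out, matching errno = 111 above'

# On the target host, a live NVMe/TCP listener would show up as a
# listening TCP socket on port 4420:
ss -ltn | grep ':4420' || echo 'no TCP listener on 4420'
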
00:20:40.387 [... the same sequence repeats for tqpair=0x124d340 at timestamps 14:25:21.690463 through 14:25:21.704935 ...]
00:20:40.387 [2024-04-26 14:25:21.705095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.387 [2024-04-26 14:25:21.705232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.387 [2024-04-26 14:25:21.705273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.387 qpair failed and we were unable to recover it.
00:20:40.387 [2024-04-26 14:25:21.706010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.387 [2024-04-26 14:25:21.706123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.387 [2024-04-26 14:25:21.706149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.387 qpair failed and we were unable to recover it. 00:20:40.387 [2024-04-26 14:25:21.706294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.706449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.706491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.706604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.706747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.706790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.706906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.707055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.707100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.707222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.707453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.707504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.707604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.707705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.707731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.707823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.707978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.708005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 
00:20:40.388 [2024-04-26 14:25:21.708137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.708324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.708408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.708534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.708677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.708706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.708861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.709004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.709050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.709162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.709294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.709326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.709442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.709555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.709587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.709725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.709828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.709853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.709982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.710145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.710187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 
00:20:40.388 [2024-04-26 14:25:21.710317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.710450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.710494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.710653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.710775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.710805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.710953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.711078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.711119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.711238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.711372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.711417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.711538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.711655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.711682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.711826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.711969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.712011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.712124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.712235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.712260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 
00:20:40.388 [2024-04-26 14:25:21.712369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.712477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.712502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.712615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.712801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.712851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.712951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.713085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.713115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.713237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.713367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.713408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.713540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.713642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.713673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.713774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.713890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.713934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.714091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.714249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.714301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 
00:20:40.388 [2024-04-26 14:25:21.714423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.714541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.714568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.714665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.714779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.714809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.714937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.715076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.715121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.715263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.715385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.715415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.715561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.715671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.715697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.715846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.716040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.716091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.716218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.716358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.716398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 
00:20:40.388 [2024-04-26 14:25:21.716579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.716704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.716732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.716922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.717086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.717138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.717253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.717406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.717457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.717557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.717656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.717683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.717795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.717991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.718022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.718133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.718229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.718254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.718353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.718453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.718480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 
00:20:40.388 [2024-04-26 14:25:21.718604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.718757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.718783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.718890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.719006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.719037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.719168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.719349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.719375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.719527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.719653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.719681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.719860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.720078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.720126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.720303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.720483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.720531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.720654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.720786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.720845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 
00:20:40.388 [2024-04-26 14:25:21.721032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.721151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.721178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.721317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.721441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.721466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.721570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.721676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.721704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.721798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.721945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.721984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.722086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.722189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.722215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.722334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.722442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.722468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 00:20:40.388 [2024-04-26 14:25:21.722571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.722660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.722685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.388 qpair failed and we were unable to recover it. 
00:20:40.388 [2024-04-26 14:25:21.722815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.388 [2024-04-26 14:25:21.722912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.722939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.723106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.723228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.723254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.723439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.723561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.723587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.723768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.723938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.723982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.724085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.724202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.724229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.724333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.724459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.724503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.724649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.724779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.724824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 
00:20:40.389 [2024-04-26 14:25:21.724947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.725074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.725101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.725220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.725408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.725459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.725562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.725659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.725686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.725816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.725943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.725990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.726100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.726197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.726224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.726341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.726465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.726492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.726615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.726786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.726818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 
00:20:40.389 [2024-04-26 14:25:21.726978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.727109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.727140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.727306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.727444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.727476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.727610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.727771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.727797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.727907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.728045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.728086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.728218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.728345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.728376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.728492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.728617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.728665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.728789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.728924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.728954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 
00:20:40.389 [2024-04-26 14:25:21.729090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.729250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.729292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.729403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.729511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.729536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.729674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.729803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.729844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.729980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.730117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.730156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.730258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.730400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.730442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.730536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.730692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.730724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.730842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.730958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.731004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 
00:20:40.389 [2024-04-26 14:25:21.731122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.731242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.731273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.731410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.731520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.731545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.731663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.731796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.731837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.731983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.732112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.732157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.732272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.732434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.732476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.732573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.732674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.732701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.732799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.732904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.732929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 
00:20:40.389 [2024-04-26 14:25:21.733032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.733164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.733189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.733326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.733430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.733456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.733550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.733657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.733688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.733800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.733949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.733991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.734089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.734199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.734241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.734336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.734431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.734456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.734571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.734686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.734718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 
00:20:40.389 [2024-04-26 14:25:21.734857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.734982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.735012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.735141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.735273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.735315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.735430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.735546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.735573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.735687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.735834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.735876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.735997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.736120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.736149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.736318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.736431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.736472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 00:20:40.389 [2024-04-26 14:25:21.736566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.736679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.389 [2024-04-26 14:25:21.736727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.389 qpair failed and we were unable to recover it. 
00:20:40.389 [2024-04-26 14:25:21.736844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.389 [2024-04-26 14:25:21.736983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.389 [2024-04-26 14:25:21.737011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.389 qpair failed and we were unable to recover it.
00:20:40.389 [... the same four-line failure sequence (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock error, then "qpair failed and we were unable to recover it.") repeats continuously from 14:25:21.737 through 14:25:21.780, every attempt against addr=10.0.0.2, port=4420; the affected tqpair handles are 0x7f6a60000b90, 0x124d340, and 0x7f6a68000b90 ...]
00:20:40.392 [2024-04-26 14:25:21.781099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.392 [2024-04-26 14:25:21.781212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.392 [2024-04-26 14:25:21.781241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.392 qpair failed and we were unable to recover it.
...
00:20:40.393 [2024-04-26 14:25:21.784014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.393 [2024-04-26 14:25:21.784193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.393 [2024-04-26 14:25:21.784255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:20:40.393 qpair failed and we were unable to recover it.
...
00:20:40.393 [2024-04-26 14:25:21.786690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.393 [2024-04-26 14:25:21.786828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.393 [2024-04-26 14:25:21.786875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.393 qpair failed and we were unable to recover it.
...
00:20:40.395 [2024-04-26 14:25:21.816442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.395 [2024-04-26 14:25:21.816551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.395 [2024-04-26 14:25:21.816577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.395 qpair failed and we were unable to recover it.
00:20:40.395 [2024-04-26 14:25:21.816678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.816771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.816797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.816887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.816979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.817005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.817096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.817195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.817223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.817320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.817409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.817434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.817538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.817655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.817687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.817829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.817952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.817979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.818090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.818204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.818248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 
00:20:40.395 [2024-04-26 14:25:21.818363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.818477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.818504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.818621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.818772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.818817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.818983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.819131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.819171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.819281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.819413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.819462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.819552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.819643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.819670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.819834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.820001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.820029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.820126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.820238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.820293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 
00:20:40.395 [2024-04-26 14:25:21.820410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.820517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.820543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.820691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.820864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.820906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.821007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.821113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.821146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.821277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.821408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.821451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.821547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.821657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.821684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.821796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.821930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.821976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.822112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.822231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.822258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 
00:20:40.395 [2024-04-26 14:25:21.822377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.822495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.822521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.822636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.822772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.822805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.822917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.823022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.823064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.823258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.823456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.823504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.823613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.823734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.823764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.823897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.824027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.824054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.824149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.824273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.824326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 
00:20:40.395 [2024-04-26 14:25:21.824441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.824551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.824577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.824671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.824819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.824878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.825036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.825201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.825254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.825403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.825540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.825577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.825640] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124c1c0 (9): Bad file descriptor 00:20:40.395 [2024-04-26 14:25:21.825783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.825913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.825941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.826068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.826178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.826204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.826337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.826466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.826498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 
00:20:40.395 [2024-04-26 14:25:21.826625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.826744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.826771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.826917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.827059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.827087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.827204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.827291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.827316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.827429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.827566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.827591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.827710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.827843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.827901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.828009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.828148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.828180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.395 qpair failed and we were unable to recover it. 00:20:40.395 [2024-04-26 14:25:21.828338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.395 [2024-04-26 14:25:21.828484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.828525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 
00:20:40.396 [2024-04-26 14:25:21.828650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.828789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.828832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.828925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.829039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.829083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.829188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.829340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.829372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.829506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.829640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.829680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.829775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.829889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.829933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.830061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.830176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.830202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.830323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.830437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.830462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 
00:20:40.396 [2024-04-26 14:25:21.830571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.830678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.830705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.830813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.830911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.830938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.831037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.831135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.831162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.831290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.831411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.831444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.831558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.831661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.831689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.831817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.831961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.832041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.832190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.832305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.832331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 
00:20:40.396 [2024-04-26 14:25:21.832428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.832520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.832548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.832676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.832852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.832912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.833027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.833195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.833254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.833363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.833486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.833512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.833654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.833821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.833875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.834000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.834143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.834180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.834290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.834407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.834442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 
00:20:40.396 [2024-04-26 14:25:21.834555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.834668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.834695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.834805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.834944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.834986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.835124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.835264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.835314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.835434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.835556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.835582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.835699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.835811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.835853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.836017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.836115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.836140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.836263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.836426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.836454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 
00:20:40.396 [2024-04-26 14:25:21.836549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.836694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.836722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.836822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.836947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.837030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.837160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.837325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.837368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.837479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.837591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.837616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.837724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.837840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.837885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.838049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.838198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.838244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.838363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.838466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.838491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 
00:20:40.396 [2024-04-26 14:25:21.838587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.838733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.838800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.838915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.839081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.839139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.839265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.839402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.839448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.839550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.839659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.839687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.839809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.839979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.840030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.840135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.840284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.840333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.840455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.840567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.840592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 
00:20:40.396 [2024-04-26 14:25:21.840729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.840875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.840921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.841054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.841216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.841275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.841408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.841520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.841546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.841647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.841771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.841853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.841948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.842066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.842120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.842242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.842386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.842470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.842565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.842653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.842680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 
00:20:40.396 [2024-04-26 14:25:21.842790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.842950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.843003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.843116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.843248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.843293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.843422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.843539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.843563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.843704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.843872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.843924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.844050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.844176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.844209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.844361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.844510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.396 [2024-04-26 14:25:21.844556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.396 qpair failed and we were unable to recover it. 00:20:40.396 [2024-04-26 14:25:21.844647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.397 [2024-04-26 14:25:21.844796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.397 [2024-04-26 14:25:21.844857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.397 qpair failed and we were unable to recover it. 
00:20:40.397 [2024-04-26 14:25:21.845036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.397 [2024-04-26 14:25:21.845247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.397 [2024-04-26 14:25:21.845299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.397 qpair failed and we were unable to recover it. 00:20:40.397 [2024-04-26 14:25:21.845410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.397 [2024-04-26 14:25:21.845542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.397 [2024-04-26 14:25:21.845588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.397 qpair failed and we were unable to recover it. 00:20:40.397 [2024-04-26 14:25:21.845740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.397 [2024-04-26 14:25:21.845849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.397 [2024-04-26 14:25:21.845927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.397 qpair failed and we were unable to recover it. 00:20:40.397 [2024-04-26 14:25:21.846018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.397 [2024-04-26 14:25:21.846142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.397 [2024-04-26 14:25:21.846194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.397 qpair failed and we were unable to recover it. 00:20:40.397 [2024-04-26 14:25:21.846286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.397 [2024-04-26 14:25:21.846372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.397 [2024-04-26 14:25:21.846398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.397 qpair failed and we were unable to recover it. 00:20:40.397 [2024-04-26 14:25:21.846494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.397 [2024-04-26 14:25:21.846620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.397 [2024-04-26 14:25:21.846652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.397 qpair failed and we were unable to recover it. 00:20:40.397 [2024-04-26 14:25:21.846809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.397 [2024-04-26 14:25:21.846974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.397 [2024-04-26 14:25:21.847029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.397 qpair failed and we were unable to recover it. 
00:20:40.397 [2024-04-26 14:25:21.847153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.397 [2024-04-26 14:25:21.847315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.397 [2024-04-26 14:25:21.847378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.397 qpair failed and we were unable to recover it.
[... 48 further repetitions of the same connect() failed (errno = 111) / qpair failure sequence for tqpair=0x7f6a60000b90, addr=10.0.0.2, port=4420, differing only in timestamps (14:25:21.847490 through 14:25:21.862275), elided ...]
00:20:40.398 [2024-04-26 14:25:21.862409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.398 [2024-04-26 14:25:21.862583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.398 [2024-04-26 14:25:21.862608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.398 qpair failed and we were unable to recover it.
00:20:40.398 [2024-04-26 14:25:21.862798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.398 [2024-04-26 14:25:21.862965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.398 [2024-04-26 14:25:21.863018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:20:40.398 qpair failed and we were unable to recover it.
[... 60 further repetitions of the same connect() failed (errno = 111) / qpair failure sequence for tqpair=0x7f6a58000b90, addr=10.0.0.2, port=4420, differing only in timestamps (14:25:21.863189 through 14:25:21.880977), elided ...]
00:20:40.399 [2024-04-26 14:25:21.881152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.399 [2024-04-26 14:25:21.881306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.399 [2024-04-26 14:25:21.881333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.399 qpair failed and we were unable to recover it.
[... 34 further repetitions of the same connect() failed (errno = 111) / qpair failure sequence for tqpair=0x124d340, addr=10.0.0.2, port=4420, differing only in timestamps (14:25:21.881451 through 14:25:21.889442), elided ...]
00:20:40.399 [2024-04-26 14:25:21.889551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.400 [2024-04-26 14:25:21.889654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.400 [2024-04-26 14:25:21.889682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.400 qpair failed and we were unable to recover it.
00:20:40.400 [2024-04-26 14:25:21.889809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.889897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.889922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.890033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.890122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.890148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.890244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.890335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.890360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.890448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.890569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.890594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.890712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.890837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.890862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.890968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.891073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.891097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.891190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.891293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.891319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 
00:20:40.400 [2024-04-26 14:25:21.891417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.891507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.891532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.891715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.891840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.891908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.892063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.892223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.892273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.892399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.892517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.892544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.892675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.892790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.892815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.892933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.893044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.893069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.893231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.893378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.893458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 
00:20:40.400 [2024-04-26 14:25:21.893549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.893648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.893677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.893780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.893876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.893902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.893993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.894122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.894148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.894236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.894331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.894357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.894449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.894543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.894569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.894690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.894835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.894887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.895016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.895186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.895238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 
00:20:40.400 [2024-04-26 14:25:21.895364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.895540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.895565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.895659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.896368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.896399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.896501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.896591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.896616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.896746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.896872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.896899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.897010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.897179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.897230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.897352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.897502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.897528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.897662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.897814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.897895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 
00:20:40.400 [2024-04-26 14:25:21.898024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.898169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.898214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.898405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.898619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.898711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.898835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.898963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.899015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.899184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.899292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.899322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.899423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.899532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.899559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.899669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.899845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.899896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.900011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.900155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.900181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 
00:20:40.400 [2024-04-26 14:25:21.900275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.900392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.900446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.900579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.900759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.900804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.900953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.901044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.901070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.901222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.901372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.901418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.901516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.901707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.901733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.901834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.901928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.901953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.902062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.902213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.902266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 
00:20:40.400 [2024-04-26 14:25:21.902368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.902489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.902543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.902645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.902773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.902826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.903010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.903162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.903187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.903317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.903473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.903503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.903607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.903716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.903743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.903850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.903940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.903966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.904123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.904256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.904303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 
00:20:40.400 [2024-04-26 14:25:21.904429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.904551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.904578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.904746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.904865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.904891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.400 qpair failed and we were unable to recover it. 00:20:40.400 [2024-04-26 14:25:21.905084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.400 [2024-04-26 14:25:21.905208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.905234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.905322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.906003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.906034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.906139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.906236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.906262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.906362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.906454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.906480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.906573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.906675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.906706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 
00:20:40.401 [2024-04-26 14:25:21.906836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.906923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.906948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.907051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.907157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.907193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.907336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.907443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.907469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.907568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.907668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.907695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.907888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.907985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.908010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.908105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.908250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.908303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.908420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.908581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.908606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 
00:20:40.401 [2024-04-26 14:25:21.908799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.908950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.909005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.909103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.909342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.909369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.909465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.909563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.909593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.909700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.909813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.909840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.909934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.910058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.910097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.910224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.910337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.910366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.910472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.910659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.910708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 
00:20:40.401 [2024-04-26 14:25:21.910841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.910933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.910959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.911112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.911209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.911235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.911406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.911547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.911594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.911701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.911881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.911930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.912042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.912175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.912203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.912297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.912395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.912426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.912523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.912617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.912652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 
00:20:40.401 [2024-04-26 14:25:21.912755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.912893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.912920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.913028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.913155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.913193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.913327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.913458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.913485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.913582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.913681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.913710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.913857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.913988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.914027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.914204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.914328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.914358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.914457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.914562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.914592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 
00:20:40.401 [2024-04-26 14:25:21.914825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.914968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.915023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.915205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.915367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.915417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.915530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.915622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.915657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.915754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.915865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.915921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.916096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.916210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.916235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.916329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.916420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.916445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.916541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.916638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.916665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 
00:20:40.401 [2024-04-26 14:25:21.916766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.916937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.916986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.917085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.917178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.917204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.917318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.917455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.917512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.917609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.917787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.917812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.917911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.917999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.918023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.918146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.918271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.918298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.918395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.918517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.918542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 
00:20:40.401 [2024-04-26 14:25:21.918641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.918728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.918752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.918845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.918937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.918961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.919081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.919205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.919229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.919326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.919414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.919439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.919563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.919665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.919691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.919791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.919881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.919905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.401 qpair failed and we were unable to recover it. 00:20:40.401 [2024-04-26 14:25:21.919999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.920094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.401 [2024-04-26 14:25:21.920119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.402 qpair failed and we were unable to recover it. 
00:20:40.402 [2024-04-26 14:25:21.920215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.402 [2024-04-26 14:25:21.920304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.402 [2024-04-26 14:25:21.920329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.402 qpair failed and we were unable to recover it. 00:20:40.402 [2024-04-26 14:25:21.920424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.402 [2024-04-26 14:25:21.920516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.402 [2024-04-26 14:25:21.920541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.402 qpair failed and we were unable to recover it. 00:20:40.402 [2024-04-26 14:25:21.920641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.402 [2024-04-26 14:25:21.920805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.402 [2024-04-26 14:25:21.920852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.402 qpair failed and we were unable to recover it. 00:20:40.402 [2024-04-26 14:25:21.920946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.402 [2024-04-26 14:25:21.921035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.402 [2024-04-26 14:25:21.921060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.402 qpair failed and we were unable to recover it. 00:20:40.402 [2024-04-26 14:25:21.921205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.402 [2024-04-26 14:25:21.921415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.402 [2024-04-26 14:25:21.921464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.402 qpair failed and we were unable to recover it. 00:20:40.402 [2024-04-26 14:25:21.921558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.402 [2024-04-26 14:25:21.921661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.402 [2024-04-26 14:25:21.921694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.402 qpair failed and we were unable to recover it. 00:20:40.402 [2024-04-26 14:25:21.921784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.402 [2024-04-26 14:25:21.921904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.402 [2024-04-26 14:25:21.921928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.402 qpair failed and we were unable to recover it. 
00:20:40.402 [2024-04-26 14:25:21.922057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.922171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.922224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.922319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.922431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.922460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.922593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.922737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.922793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.922974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.923153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.923203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.923301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.923409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.923436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.923560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.923698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.923729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.923867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.923991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.924018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
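[Editorial note, not part of the captured test output: errno = 111 in the records above is ECONNREFUSED on Linux, i.e. the peer at 10.0.0.2:4420 (4420 is the NVMe/TCP default port) actively refused each TCP connection because nothing was accepting on that port at the time. The minimal C sketch below is an illustration only, not SPDK code; against a reachable host with no listener on the port it reproduces the same "connect() failed, errno = 111" condition that posix_sock_create reports.]

/* Illustration only (not from the SPDK sources): a TCP connect() to a
 * reachable host with no listener on the port fails with errno 111,
 * ECONNREFUSED, which is the failure logged above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        /* With no listener on the target this prints errno 111 (Connection refused). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}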
00:20:40.402 [2024-04-26 14:25:21.924129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.924262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.924320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.924454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.924668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.924715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.924958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.925149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.925176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.925272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.925389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.925442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.925562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.925695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.925722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.925892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.926094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.926149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.926251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.926357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.926389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.926519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.926758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.926790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.926894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.926995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.927022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.927134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.927298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.927339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.927448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.927558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.927584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.927708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.927859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.927889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.928007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.928105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.928130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.928259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.928363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.928389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.928491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.928602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.928663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.928842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.928943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.928970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.929068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.929252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.929300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.929418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.929544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.929575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.929688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.929817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.929844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.929948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.930100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.930125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.930259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.930368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.930394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.930494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.930619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.930687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.930833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.931041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.931071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.402 [2024-04-26 14:25:21.931213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.931438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.402 [2024-04-26 14:25:21.931467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.402 qpair failed and we were unable to recover it.
00:20:40.680 [2024-04-26 14:25:21.932363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.680 [2024-04-26 14:25:21.932482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.680 [2024-04-26 14:25:21.932510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.680 qpair failed and we were unable to recover it.
00:20:40.680 [2024-04-26 14:25:21.932713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.680 [2024-04-26 14:25:21.932857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.680 [2024-04-26 14:25:21.932907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.680 qpair failed and we were unable to recover it.
00:20:40.680 [2024-04-26 14:25:21.933044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.680 [2024-04-26 14:25:21.933164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.680 [2024-04-26 14:25:21.933192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.680 qpair failed and we were unable to recover it.
00:20:40.680 [2024-04-26 14:25:21.933359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.680 [2024-04-26 14:25:21.933474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.680 [2024-04-26 14:25:21.933505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.680 qpair failed and we were unable to recover it.
00:20:40.680 [2024-04-26 14:25:21.933629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.680 [2024-04-26 14:25:21.933795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.680 [2024-04-26 14:25:21.933841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.680 qpair failed and we were unable to recover it.
00:20:40.680 [2024-04-26 14:25:21.933962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.680 [2024-04-26 14:25:21.934182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.680 [2024-04-26 14:25:21.934209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.680 qpair failed and we were unable to recover it.
00:20:40.680 [2024-04-26 14:25:21.934306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.680 [2024-04-26 14:25:21.934421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.680 [2024-04-26 14:25:21.934465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.680 qpair failed and we were unable to recover it.
00:20:40.680 [2024-04-26 14:25:21.934565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.934700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.934752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.681 qpair failed and we were unable to recover it.
00:20:40.681 [2024-04-26 14:25:21.934890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.934983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.935009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.681 qpair failed and we were unable to recover it.
00:20:40.681 [2024-04-26 14:25:21.935170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.935328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.935359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.681 qpair failed and we were unable to recover it.
00:20:40.681 [2024-04-26 14:25:21.935480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.935589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.935619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.681 qpair failed and we were unable to recover it.
00:20:40.681 [2024-04-26 14:25:21.935777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.935917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.935947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.681 qpair failed and we were unable to recover it.
00:20:40.681 [2024-04-26 14:25:21.936058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.936168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.936195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.681 qpair failed and we were unable to recover it.
00:20:40.681 [2024-04-26 14:25:21.936293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.936388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.936417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.681 qpair failed and we were unable to recover it.
00:20:40.681 [2024-04-26 14:25:21.936527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.936644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.936670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.681 qpair failed and we were unable to recover it.
00:20:40.681 [2024-04-26 14:25:21.936768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.936865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.936890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.681 qpair failed and we were unable to recover it.
00:20:40.681 [2024-04-26 14:25:21.936981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.937080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.937106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.681 qpair failed and we were unable to recover it.
00:20:40.681 [2024-04-26 14:25:21.937209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.937295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.937320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.681 qpair failed and we were unable to recover it.
00:20:40.681 [2024-04-26 14:25:21.937420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.937539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.937565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.681 qpair failed and we were unable to recover it.
00:20:40.681 [2024-04-26 14:25:21.937663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.937770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.937795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.681 qpair failed and we were unable to recover it.
00:20:40.681 [2024-04-26 14:25:21.937924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.938045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.938071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.681 qpair failed and we were unable to recover it.
00:20:40.681 [2024-04-26 14:25:21.938172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.938282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.938340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.681 qpair failed and we were unable to recover it.
00:20:40.681 [2024-04-26 14:25:21.938484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.938628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.938663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.681 qpair failed and we were unable to recover it.
00:20:40.681 [2024-04-26 14:25:21.938761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.938852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.938879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.681 qpair failed and we were unable to recover it.
00:20:40.681 [2024-04-26 14:25:21.938981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.939074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.939101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.681 qpair failed and we were unable to recover it.
00:20:40.681 [2024-04-26 14:25:21.939195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.939281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.939305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.681 qpair failed and we were unable to recover it.
00:20:40.681 [2024-04-26 14:25:21.939397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.939492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.939518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.681 qpair failed and we were unable to recover it.
00:20:40.681 [2024-04-26 14:25:21.939616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.939709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.939735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.681 qpair failed and we were unable to recover it.
00:20:40.681 [2024-04-26 14:25:21.939833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.939931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.939957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.681 qpair failed and we were unable to recover it.
00:20:40.681 [2024-04-26 14:25:21.940068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.940192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.940243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.681 qpair failed and we were unable to recover it.
00:20:40.681 [2024-04-26 14:25:21.940351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.940438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.940463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.681 qpair failed and we were unable to recover it.
00:20:40.681 [2024-04-26 14:25:21.940553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.940656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.940681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.681 qpair failed and we were unable to recover it.
00:20:40.681 [2024-04-26 14:25:21.940792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.681 [2024-04-26 14:25:21.940955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.941008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.941100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.941186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.941211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.941325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.941417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.941442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.941534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.941636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.941662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.941765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.941863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.941888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.941998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.942125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.942176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.942273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.942444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.942494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.942594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.942737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.942790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.942892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.943785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.943816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.943941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.944115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.944152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.944251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.944363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.944421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.944520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.944609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.944642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.944748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.944869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.944922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.945015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.945117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.945142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.945236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.945329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.945354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.945446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.945536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.945561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.945665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.945789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.945815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.945928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.946033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.946063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.946191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.946378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.946403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.946498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.946622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.946654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.946750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.946910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.946939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.947072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.947303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.947350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.947471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.947584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.947609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.947716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.947834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.947898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.948040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.948179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.948223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.948338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.948459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.948519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.948676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.948795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.948836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.948938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.949062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.949117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.682 qpair failed and we were unable to recover it.
00:20:40.682 [2024-04-26 14:25:21.949220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.949327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.682 [2024-04-26 14:25:21.949351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.949459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.949551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.949577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.950260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.950367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.950394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.950486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.950575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.950600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.950750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.950900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.950946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.951101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.951253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.951300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.951396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.951485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.951510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.951605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.951724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.951751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.951862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.951981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.952031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.952134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.952240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.952266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.952362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.952453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.952478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.952581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.952694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.952719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.952823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.952954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.953006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.953131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.953237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.953262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.953380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.953497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.953521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.953626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.953724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.953750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.953847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.953942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.953968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.954067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.954159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.954184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.954288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.954374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.954399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.954524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.954613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.954645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.954763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.954892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.954920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.955025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.955121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.955146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.955265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.955377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.955401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.955495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.955583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.955608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.955704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.955842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.955898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.955994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.956082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.683 [2024-04-26 14:25:21.956107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.683 qpair failed and we were unable to recover it.
00:20:40.683 [2024-04-26 14:25:21.956203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.684 [2024-04-26 14:25:21.956289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.684 [2024-04-26 14:25:21.956313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.684 qpair failed and we were unable to recover it.
00:20:40.684 [2024-04-26 14:25:21.956428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.684 [2024-04-26 14:25:21.956513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.684 [2024-04-26 14:25:21.956538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.684 qpair failed and we were unable to recover it.
00:20:40.684 [2024-04-26 14:25:21.957249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.684 [2024-04-26 14:25:21.957357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.684 [2024-04-26 14:25:21.957384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.684 qpair failed and we were unable to recover it.
00:20:40.684 [2024-04-26 14:25:21.957486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.684 [2024-04-26 14:25:21.958137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.684 [2024-04-26 14:25:21.958166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.684 qpair failed and we were unable to recover it.
00:20:40.684 [2024-04-26 14:25:21.958272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.684 [2024-04-26 14:25:21.958365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.684 [2024-04-26 14:25:21.958390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.684 qpair failed and we were unable to recover it.
00:20:40.684 [2024-04-26 14:25:21.958481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.684 [2024-04-26 14:25:21.958572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.684 [2024-04-26 14:25:21.958598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.684 qpair failed and we were unable to recover it.
00:20:40.684 [2024-04-26 14:25:21.958735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.684 [2024-04-26 14:25:21.958874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.684 [2024-04-26 14:25:21.958930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.684 qpair failed and we were unable to recover it.
00:20:40.684 [2024-04-26 14:25:21.959055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.684 [2024-04-26 14:25:21.959168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.684 [2024-04-26 14:25:21.959194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.684 qpair failed and we were unable to recover it.
00:20:40.684 [2024-04-26 14:25:21.959284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.684 [2024-04-26 14:25:21.959400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.684 [2024-04-26 14:25:21.959447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.684 qpair failed and we were unable to recover it.
00:20:40.684 [2024-04-26 14:25:21.959553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.959660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.959690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.684 qpair failed and we were unable to recover it. 00:20:40.684 [2024-04-26 14:25:21.959797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.959897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.959924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.684 qpair failed and we were unable to recover it. 00:20:40.684 [2024-04-26 14:25:21.960022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.960116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.960143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.684 qpair failed and we were unable to recover it. 00:20:40.684 [2024-04-26 14:25:21.960239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.960371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.960424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.684 qpair failed and we were unable to recover it. 00:20:40.684 [2024-04-26 14:25:21.960523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.960616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.960649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.684 qpair failed and we were unable to recover it. 00:20:40.684 [2024-04-26 14:25:21.960746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.960856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.960915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.684 qpair failed and we were unable to recover it. 00:20:40.684 [2024-04-26 14:25:21.961062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.961224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.961278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.684 qpair failed and we were unable to recover it. 
00:20:40.684 [2024-04-26 14:25:21.961397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.961529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.961555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.684 qpair failed and we were unable to recover it. 00:20:40.684 [2024-04-26 14:25:21.961654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.961750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.961775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.684 qpair failed and we were unable to recover it. 00:20:40.684 [2024-04-26 14:25:21.961911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.962062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.962114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.684 qpair failed and we were unable to recover it. 00:20:40.684 [2024-04-26 14:25:21.962239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.962395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.962449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.684 qpair failed and we were unable to recover it. 00:20:40.684 [2024-04-26 14:25:21.962564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.962674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.962701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.684 qpair failed and we were unable to recover it. 00:20:40.684 [2024-04-26 14:25:21.962820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.963005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.963062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.684 qpair failed and we were unable to recover it. 00:20:40.684 [2024-04-26 14:25:21.963160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.963300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.963356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.684 qpair failed and we were unable to recover it. 
00:20:40.684 [2024-04-26 14:25:21.963461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.963561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.963588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.684 qpair failed and we were unable to recover it. 00:20:40.684 [2024-04-26 14:25:21.963703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.963801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.963828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.684 qpair failed and we were unable to recover it. 00:20:40.684 [2024-04-26 14:25:21.963954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.964103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.964154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.684 qpair failed and we were unable to recover it. 00:20:40.684 [2024-04-26 14:25:21.964253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.964363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.964419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.684 qpair failed and we were unable to recover it. 00:20:40.684 [2024-04-26 14:25:21.964532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.964628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.964662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.684 qpair failed and we were unable to recover it. 00:20:40.684 [2024-04-26 14:25:21.964762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.964856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.684 [2024-04-26 14:25:21.964881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.684 qpair failed and we were unable to recover it. 00:20:40.685 [2024-04-26 14:25:21.964971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.965072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.965100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.685 qpair failed and we were unable to recover it. 
00:20:40.685 [2024-04-26 14:25:21.965230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.965351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.965376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.685 qpair failed and we were unable to recover it. 00:20:40.685 [2024-04-26 14:25:21.965484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.965580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.965605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.685 qpair failed and we were unable to recover it. 00:20:40.685 [2024-04-26 14:25:21.965741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.965865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.965890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.685 qpair failed and we were unable to recover it. 00:20:40.685 [2024-04-26 14:25:21.966012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.966160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.966202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.685 qpair failed and we were unable to recover it. 00:20:40.685 [2024-04-26 14:25:21.966340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.966462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.966488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.685 qpair failed and we were unable to recover it. 00:20:40.685 [2024-04-26 14:25:21.966584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.966685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.966711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.685 qpair failed and we were unable to recover it. 00:20:40.685 [2024-04-26 14:25:21.966845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.966967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.966993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.685 qpair failed and we were unable to recover it. 
00:20:40.685 [2024-04-26 14:25:21.967112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.967263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.967321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.685 qpair failed and we were unable to recover it. 00:20:40.685 [2024-04-26 14:25:21.967450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.967554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.967579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.685 qpair failed and we were unable to recover it. 00:20:40.685 [2024-04-26 14:25:21.967672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.967809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.967860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.685 qpair failed and we were unable to recover it. 00:20:40.685 [2024-04-26 14:25:21.967957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.968080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.968133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.685 qpair failed and we were unable to recover it. 00:20:40.685 [2024-04-26 14:25:21.968252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.968402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.968462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.685 qpair failed and we were unable to recover it. 00:20:40.685 [2024-04-26 14:25:21.968555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.968647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.968673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.685 qpair failed and we were unable to recover it. 00:20:40.685 [2024-04-26 14:25:21.968787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.968928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.968987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.685 qpair failed and we were unable to recover it. 
00:20:40.685 [2024-04-26 14:25:21.969092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.969215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.969256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.685 qpair failed and we were unable to recover it. 00:20:40.685 [2024-04-26 14:25:21.969369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.969483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.969509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.685 qpair failed and we were unable to recover it. 00:20:40.685 [2024-04-26 14:25:21.969661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.969782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.969808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.685 qpair failed and we were unable to recover it. 00:20:40.685 [2024-04-26 14:25:21.969912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.970047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.970101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.685 qpair failed and we were unable to recover it. 00:20:40.685 [2024-04-26 14:25:21.970241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.685 [2024-04-26 14:25:21.970361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.970387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 00:20:40.686 [2024-04-26 14:25:21.970480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.970581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.970607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 00:20:40.686 [2024-04-26 14:25:21.970709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.970827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.970877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 
00:20:40.686 [2024-04-26 14:25:21.970995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.971135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.971184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 00:20:40.686 [2024-04-26 14:25:21.971333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.971479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.971505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 00:20:40.686 [2024-04-26 14:25:21.971646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.971796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.971845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 00:20:40.686 [2024-04-26 14:25:21.971995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.972148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.972194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 00:20:40.686 [2024-04-26 14:25:21.972320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.972431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.972456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 00:20:40.686 [2024-04-26 14:25:21.972552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.972649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.972675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 00:20:40.686 [2024-04-26 14:25:21.972819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.972984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.973011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 
00:20:40.686 [2024-04-26 14:25:21.973111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.973251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.973278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 00:20:40.686 [2024-04-26 14:25:21.973397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.973539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.973596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 00:20:40.686 [2024-04-26 14:25:21.973704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.973800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.973826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 00:20:40.686 [2024-04-26 14:25:21.973965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.974093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.974117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 00:20:40.686 [2024-04-26 14:25:21.974242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.974379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.974416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 00:20:40.686 [2024-04-26 14:25:21.974545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.974688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.974744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 00:20:40.686 [2024-04-26 14:25:21.974841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.974933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.974957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 
00:20:40.686 [2024-04-26 14:25:21.975093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.975239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.975264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 00:20:40.686 [2024-04-26 14:25:21.975404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.975528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.975553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 00:20:40.686 [2024-04-26 14:25:21.975658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.975781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.975829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 00:20:40.686 [2024-04-26 14:25:21.975950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.976124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.976188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 00:20:40.686 [2024-04-26 14:25:21.976318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.976448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.976493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 00:20:40.686 [2024-04-26 14:25:21.976617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.976759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.976784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 00:20:40.686 [2024-04-26 14:25:21.977569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.977683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.977710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 
00:20:40.686 [2024-04-26 14:25:21.977806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.977898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.977923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 00:20:40.686 [2024-04-26 14:25:21.978053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.978165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.978190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 00:20:40.686 [2024-04-26 14:25:21.978307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.978479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.978530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 00:20:40.686 [2024-04-26 14:25:21.978623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.978767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.978793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.686 qpair failed and we were unable to recover it. 00:20:40.686 [2024-04-26 14:25:21.978892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.686 [2024-04-26 14:25:21.978979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.979005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 00:20:40.687 [2024-04-26 14:25:21.979142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.979287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.979336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 00:20:40.687 [2024-04-26 14:25:21.979468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.979614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.979669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 
00:20:40.687 [2024-04-26 14:25:21.979789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.979900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.979926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 00:20:40.687 [2024-04-26 14:25:21.980052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.980192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.980238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 00:20:40.687 [2024-04-26 14:25:21.980368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.980486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.980511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 00:20:40.687 [2024-04-26 14:25:21.980669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.980761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.980786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 00:20:40.687 [2024-04-26 14:25:21.980883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.980986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.981012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 00:20:40.687 [2024-04-26 14:25:21.981126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.981217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.981242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 00:20:40.687 [2024-04-26 14:25:21.981340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.981445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.981470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 
00:20:40.687 [2024-04-26 14:25:21.981574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.981699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.981728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 00:20:40.687 [2024-04-26 14:25:21.981833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.981950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.981975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 00:20:40.687 [2024-04-26 14:25:21.982082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.982193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.982222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 00:20:40.687 [2024-04-26 14:25:21.982323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.982420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.982447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 00:20:40.687 [2024-04-26 14:25:21.982547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.982645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.982672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 00:20:40.687 [2024-04-26 14:25:21.982763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.982858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.982883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 00:20:40.687 [2024-04-26 14:25:21.982997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.983086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.983111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 
00:20:40.687 [2024-04-26 14:25:21.983205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.983299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.983324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 00:20:40.687 [2024-04-26 14:25:21.983425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.983519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.983546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 00:20:40.687 [2024-04-26 14:25:21.983659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.983769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.983795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 00:20:40.687 [2024-04-26 14:25:21.983888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.983989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.984017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 00:20:40.687 [2024-04-26 14:25:21.984110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.984207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.984235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 00:20:40.687 [2024-04-26 14:25:21.984349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.984447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.984473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 00:20:40.687 [2024-04-26 14:25:21.984585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.984685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.984712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 
00:20:40.687 [2024-04-26 14:25:21.984833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.984938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.984964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 00:20:40.687 [2024-04-26 14:25:21.985078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.985180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.985205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 00:20:40.687 [2024-04-26 14:25:21.985299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.985390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.985415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 00:20:40.687 [2024-04-26 14:25:21.985505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.988808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.988834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 00:20:40.687 [2024-04-26 14:25:21.988932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.989020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.687 [2024-04-26 14:25:21.989045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.687 qpair failed and we were unable to recover it. 00:20:40.687 [2024-04-26 14:25:21.989143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.989237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.989262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 00:20:40.688 [2024-04-26 14:25:21.989389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.989479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.989504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 
00:20:40.688 [2024-04-26 14:25:21.989609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.989713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.989739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 00:20:40.688 [2024-04-26 14:25:21.989852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.989945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.989972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 00:20:40.688 [2024-04-26 14:25:21.990081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.990176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.990201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 00:20:40.688 [2024-04-26 14:25:21.990308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.990415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.990440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 00:20:40.688 [2024-04-26 14:25:21.990551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.990657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.990683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 00:20:40.688 [2024-04-26 14:25:21.990781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.990878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.990904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 00:20:40.688 [2024-04-26 14:25:21.991008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.991135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.991162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 
00:20:40.688 [2024-04-26 14:25:21.991289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.991376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.991401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 00:20:40.688 [2024-04-26 14:25:21.991503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.991609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.991644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 00:20:40.688 [2024-04-26 14:25:21.991742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.991841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.991868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 00:20:40.688 [2024-04-26 14:25:21.991973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.992070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.992096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 00:20:40.688 [2024-04-26 14:25:21.992194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.992306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.992332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 00:20:40.688 [2024-04-26 14:25:21.992427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.992531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.992558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 00:20:40.688 [2024-04-26 14:25:21.992672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.992784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.992815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 
00:20:40.688 [2024-04-26 14:25:21.992911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.993006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.993032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 00:20:40.688 [2024-04-26 14:25:21.993125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.993215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.993240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 00:20:40.688 [2024-04-26 14:25:21.993335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.993432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.993458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 00:20:40.688 [2024-04-26 14:25:21.993548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.993656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.993683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 00:20:40.688 [2024-04-26 14:25:21.993781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.993877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.993903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 00:20:40.688 [2024-04-26 14:25:21.994003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.994094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.994119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 00:20:40.688 [2024-04-26 14:25:21.994217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.994321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.994347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 
00:20:40.688 [2024-04-26 14:25:21.994455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.994548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.994573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 00:20:40.688 [2024-04-26 14:25:21.994673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.994774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.994799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 00:20:40.688 [2024-04-26 14:25:21.994895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.994988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.995018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.688 qpair failed and we were unable to recover it. 00:20:40.688 [2024-04-26 14:25:21.995137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.688 [2024-04-26 14:25:21.995230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.995255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 00:20:40.689 [2024-04-26 14:25:21.995364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.995453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.995479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 00:20:40.689 [2024-04-26 14:25:21.995572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.995676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.995703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 00:20:40.689 [2024-04-26 14:25:21.995811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.995917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.995942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 
00:20:40.689 [2024-04-26 14:25:21.996038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.996129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.996155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 00:20:40.689 [2024-04-26 14:25:21.996256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.996346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.996370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 00:20:40.689 [2024-04-26 14:25:21.996461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.996552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.996576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 00:20:40.689 [2024-04-26 14:25:21.996668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.996762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.996787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 00:20:40.689 [2024-04-26 14:25:21.996880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.996974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.996999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 00:20:40.689 [2024-04-26 14:25:21.997098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.997193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.997224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 00:20:40.689 [2024-04-26 14:25:21.997326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.997419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.997444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 
00:20:40.689 [2024-04-26 14:25:21.997544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.997653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.997680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 00:20:40.689 [2024-04-26 14:25:21.997780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.997887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.997912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 00:20:40.689 [2024-04-26 14:25:21.998009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.998106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.998133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 00:20:40.689 [2024-04-26 14:25:21.998230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.998331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.998358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 00:20:40.689 [2024-04-26 14:25:21.998450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.998569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.998594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 00:20:40.689 [2024-04-26 14:25:21.998706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.998799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.998825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 00:20:40.689 [2024-04-26 14:25:21.998921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.999009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.999035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 
00:20:40.689 [2024-04-26 14:25:21.999136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.999231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.999256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 00:20:40.689 [2024-04-26 14:25:21.999366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.999463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.999489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 00:20:40.689 [2024-04-26 14:25:21.999593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.999701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.999729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 00:20:40.689 [2024-04-26 14:25:21.999825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.999920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:21.999947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 00:20:40.689 [2024-04-26 14:25:22.000046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:22.000145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:22.000173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 00:20:40.689 [2024-04-26 14:25:22.000266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:22.000355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:22.000380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 00:20:40.689 [2024-04-26 14:25:22.000485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:22.000578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:22.000603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 
00:20:40.689 [2024-04-26 14:25:22.000703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:22.000798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:22.000823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 00:20:40.689 [2024-04-26 14:25:22.000930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:22.001021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:22.001047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 00:20:40.689 [2024-04-26 14:25:22.001160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:22.001254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:22.001281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 00:20:40.689 [2024-04-26 14:25:22.001392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:22.001486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:22.001512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.689 qpair failed and we were unable to recover it. 00:20:40.689 [2024-04-26 14:25:22.001606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.689 [2024-04-26 14:25:22.001724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.001750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.690 qpair failed and we were unable to recover it. 00:20:40.690 [2024-04-26 14:25:22.001849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.001960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.001986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.690 qpair failed and we were unable to recover it. 00:20:40.690 [2024-04-26 14:25:22.002085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.002175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.002201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.690 qpair failed and we were unable to recover it. 
00:20:40.690 [2024-04-26 14:25:22.002306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.002406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.002432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.690 qpair failed and we were unable to recover it. 00:20:40.690 [2024-04-26 14:25:22.002524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.002615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.002650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.690 qpair failed and we were unable to recover it. 00:20:40.690 [2024-04-26 14:25:22.002743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.002839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.002864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.690 qpair failed and we were unable to recover it. 00:20:40.690 [2024-04-26 14:25:22.002958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.003053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.003079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.690 qpair failed and we were unable to recover it. 00:20:40.690 [2024-04-26 14:25:22.003178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.003287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.003312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.690 qpair failed and we were unable to recover it. 00:20:40.690 [2024-04-26 14:25:22.003441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.003534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.003560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.690 qpair failed and we were unable to recover it. 00:20:40.690 [2024-04-26 14:25:22.003653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.003745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.003770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.690 qpair failed and we were unable to recover it. 
00:20:40.690 [2024-04-26 14:25:22.003877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.003979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.004007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.690 qpair failed and we were unable to recover it. 00:20:40.690 [2024-04-26 14:25:22.004115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.004209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.004236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.690 qpair failed and we were unable to recover it. 00:20:40.690 [2024-04-26 14:25:22.004330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.004417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.004443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.690 qpair failed and we were unable to recover it. 00:20:40.690 [2024-04-26 14:25:22.004553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.004656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.004683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.690 qpair failed and we were unable to recover it. 00:20:40.690 [2024-04-26 14:25:22.004789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.004877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.004903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.690 qpair failed and we were unable to recover it. 00:20:40.690 [2024-04-26 14:25:22.005004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.005098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.005124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.690 qpair failed and we were unable to recover it. 00:20:40.690 [2024-04-26 14:25:22.005235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.005325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.005350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.690 qpair failed and we were unable to recover it. 
00:20:40.690 [2024-04-26 14:25:22.005441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.005549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.005574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.690 qpair failed and we were unable to recover it. 00:20:40.690 [2024-04-26 14:25:22.005675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.005767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.005792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.690 qpair failed and we were unable to recover it. 00:20:40.690 [2024-04-26 14:25:22.005885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.005990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.006016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.690 qpair failed and we were unable to recover it. 00:20:40.690 [2024-04-26 14:25:22.006111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.006207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.006231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.690 qpair failed and we were unable to recover it. 00:20:40.690 [2024-04-26 14:25:22.006328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.006428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.006452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.690 qpair failed and we were unable to recover it. 00:20:40.690 [2024-04-26 14:25:22.006551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.006650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.006677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.690 qpair failed and we were unable to recover it. 00:20:40.690 [2024-04-26 14:25:22.006766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.006859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.690 [2024-04-26 14:25:22.006884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.690 qpair failed and we were unable to recover it. 
00:20:40.690 [2024-04-26 14:25:22.006985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.007086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.007112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 00:20:40.691 [2024-04-26 14:25:22.007208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.007297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.007323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 00:20:40.691 [2024-04-26 14:25:22.007431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.007522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.007547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 00:20:40.691 [2024-04-26 14:25:22.007657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.007772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.007798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 00:20:40.691 [2024-04-26 14:25:22.007899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.008014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.008039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 00:20:40.691 [2024-04-26 14:25:22.008152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.008241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.008266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 00:20:40.691 [2024-04-26 14:25:22.008373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.008470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.008497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 
00:20:40.691 [2024-04-26 14:25:22.008598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.008713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.008739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 00:20:40.691 [2024-04-26 14:25:22.008835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.008938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.008964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 00:20:40.691 [2024-04-26 14:25:22.009080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.009174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.009200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 00:20:40.691 [2024-04-26 14:25:22.009308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.009403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.009429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 00:20:40.691 [2024-04-26 14:25:22.009525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.009622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.009655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 00:20:40.691 [2024-04-26 14:25:22.009754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.009853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.009878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 00:20:40.691 [2024-04-26 14:25:22.009966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.010058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.010083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 
00:20:40.691 [2024-04-26 14:25:22.010176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.010263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.010288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 00:20:40.691 [2024-04-26 14:25:22.010383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.010472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.010497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 00:20:40.691 [2024-04-26 14:25:22.010589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.010687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.010714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 00:20:40.691 [2024-04-26 14:25:22.010822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.010913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.010939] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 00:20:40.691 [2024-04-26 14:25:22.011056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.011153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.011178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 00:20:40.691 [2024-04-26 14:25:22.011290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.011376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.011400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 00:20:40.691 [2024-04-26 14:25:22.011495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.011591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.011616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 
00:20:40.691 [2024-04-26 14:25:22.011718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.011808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.011833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 00:20:40.691 [2024-04-26 14:25:22.011925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.012019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.012044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 00:20:40.691 [2024-04-26 14:25:22.012135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.012227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.012253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 00:20:40.691 [2024-04-26 14:25:22.012359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.012451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.012476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 00:20:40.691 [2024-04-26 14:25:22.012577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.012678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.012704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 00:20:40.691 [2024-04-26 14:25:22.012796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.012888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.012914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 00:20:40.691 [2024-04-26 14:25:22.013024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.013118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.013143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 
00:20:40.691 [2024-04-26 14:25:22.013242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.013335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.691 [2024-04-26 14:25:22.013360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.691 qpair failed and we were unable to recover it. 00:20:40.691 [2024-04-26 14:25:22.013460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.013558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.013585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 00:20:40.692 [2024-04-26 14:25:22.013692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.013786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.013812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 00:20:40.692 [2024-04-26 14:25:22.013925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.014019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.014044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 00:20:40.692 [2024-04-26 14:25:22.014153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.014245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.014270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 00:20:40.692 [2024-04-26 14:25:22.014380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.014468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.014493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 00:20:40.692 [2024-04-26 14:25:22.014595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.014700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.014727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 
00:20:40.692 [2024-04-26 14:25:22.014819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.014913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.014940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 00:20:40.692 [2024-04-26 14:25:22.015044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.015152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.015177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 00:20:40.692 [2024-04-26 14:25:22.015284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.015392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.015417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 00:20:40.692 [2024-04-26 14:25:22.015508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.015606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.015645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 00:20:40.692 [2024-04-26 14:25:22.015748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.015848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.015873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 00:20:40.692 [2024-04-26 14:25:22.015966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.016053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.016078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 00:20:40.692 [2024-04-26 14:25:22.016173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.016272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.016300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 
00:20:40.692 [2024-04-26 14:25:22.016392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.016481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.016506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 00:20:40.692 [2024-04-26 14:25:22.016613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.016720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.016747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 00:20:40.692 [2024-04-26 14:25:22.016843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.016930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.016955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 00:20:40.692 [2024-04-26 14:25:22.017053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.017152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.017178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 00:20:40.692 [2024-04-26 14:25:22.017271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.017385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.017412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 00:20:40.692 [2024-04-26 14:25:22.017511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.017611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.017647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 00:20:40.692 [2024-04-26 14:25:22.017756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.017850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.017876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 
00:20:40.692 [2024-04-26 14:25:22.017974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.018078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.018103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 00:20:40.692 [2024-04-26 14:25:22.018197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.018289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.018314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 00:20:40.692 [2024-04-26 14:25:22.018411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.018498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.018522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 00:20:40.692 [2024-04-26 14:25:22.018616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.018734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.018761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 00:20:40.692 [2024-04-26 14:25:22.018856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.018951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.018976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 00:20:40.692 [2024-04-26 14:25:22.019081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.019171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.019196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 00:20:40.692 [2024-04-26 14:25:22.019301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.019397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.019422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 
00:20:40.692 [2024-04-26 14:25:22.019515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.019605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.019637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.692 qpair failed and we were unable to recover it. 00:20:40.692 [2024-04-26 14:25:22.019727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.692 [2024-04-26 14:25:22.019816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.693 [2024-04-26 14:25:22.019841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.693 qpair failed and we were unable to recover it. 00:20:40.693 [2024-04-26 14:25:22.019939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.693 [2024-04-26 14:25:22.020028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.693 [2024-04-26 14:25:22.020053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.693 qpair failed and we were unable to recover it. 00:20:40.693 [2024-04-26 14:25:22.020161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.693 [2024-04-26 14:25:22.020256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.693 [2024-04-26 14:25:22.020283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.693 qpair failed and we were unable to recover it. 00:20:40.693 [2024-04-26 14:25:22.020382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.693 [2024-04-26 14:25:22.020479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.693 [2024-04-26 14:25:22.020505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.693 qpair failed and we were unable to recover it. 00:20:40.693 [2024-04-26 14:25:22.020605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.693 [2024-04-26 14:25:22.020708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.693 [2024-04-26 14:25:22.020734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.693 qpair failed and we were unable to recover it. 00:20:40.693 [2024-04-26 14:25:22.020828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.693 [2024-04-26 14:25:22.020924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.693 [2024-04-26 14:25:22.020951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.693 qpair failed and we were unable to recover it. 
00:20:40.693 [2024-04-26 14:25:22.021047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.693 [2024-04-26 14:25:22.021133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.693 [2024-04-26 14:25:22.021158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.693 qpair failed and we were unable to recover it. 00:20:40.693 [2024-04-26 14:25:22.021269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.693 [2024-04-26 14:25:22.021378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.693 [2024-04-26 14:25:22.021407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.693 qpair failed and we were unable to recover it. 00:20:40.693 [2024-04-26 14:25:22.021508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.693 [2024-04-26 14:25:22.021602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.693 [2024-04-26 14:25:22.021628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.693 qpair failed and we were unable to recover it. 00:20:40.693 [2024-04-26 14:25:22.021765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.693 [2024-04-26 14:25:22.021916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.693 [2024-04-26 14:25:22.021969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.693 qpair failed and we were unable to recover it. 00:20:40.693 [2024-04-26 14:25:22.022099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.693 [2024-04-26 14:25:22.022231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.693 [2024-04-26 14:25:22.022284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.693 qpair failed and we were unable to recover it. 00:20:40.693 [2024-04-26 14:25:22.022441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.693 [2024-04-26 14:25:22.022558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.693 [2024-04-26 14:25:22.022582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.693 qpair failed and we were unable to recover it. 00:20:40.693 [2024-04-26 14:25:22.022679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.693 [2024-04-26 14:25:22.022798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.693 [2024-04-26 14:25:22.022839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.693 qpair failed and we were unable to recover it. 
[... the same four-record sequence (two posix.c:1037:posix_sock_create connect() failures with errno = 111, one nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock sock connection error against addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats without interruption, about 140 further times, from 14:25:22.022978 through 14:25:22.059580; every occurrence is for tqpair=0x124d340 except a run of six for tqpair=0x7f6a60000b90 between 14:25:22.025234 and 14:25:22.026687 ...]
00:20:40.698 [2024-04-26 14:25:22.059686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.059833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.059862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.698 qpair failed and we were unable to recover it. 00:20:40.698 [2024-04-26 14:25:22.059962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.060067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.060128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.698 qpair failed and we were unable to recover it. 00:20:40.698 [2024-04-26 14:25:22.060263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.060401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.060451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.698 qpair failed and we were unable to recover it. 00:20:40.698 [2024-04-26 14:25:22.060560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.060649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.060690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.698 qpair failed and we were unable to recover it. 00:20:40.698 [2024-04-26 14:25:22.060818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.060950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.060988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.698 qpair failed and we were unable to recover it. 00:20:40.698 [2024-04-26 14:25:22.061122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.061283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.061331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.698 qpair failed and we were unable to recover it. 00:20:40.698 [2024-04-26 14:25:22.061445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.061579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.061604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.698 qpair failed and we were unable to recover it. 
00:20:40.698 [2024-04-26 14:25:22.061714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.061900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.061955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.698 qpair failed and we were unable to recover it. 00:20:40.698 [2024-04-26 14:25:22.062069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.062234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.062292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.698 qpair failed and we were unable to recover it. 00:20:40.698 [2024-04-26 14:25:22.062417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.062563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.062650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.698 qpair failed and we were unable to recover it. 00:20:40.698 [2024-04-26 14:25:22.062756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.062878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.062924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.698 qpair failed and we were unable to recover it. 00:20:40.698 [2024-04-26 14:25:22.063050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.063226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.063273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.698 qpair failed and we were unable to recover it. 00:20:40.698 [2024-04-26 14:25:22.063410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.063533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.063559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.698 qpair failed and we were unable to recover it. 00:20:40.698 [2024-04-26 14:25:22.063669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.063764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.063797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.698 qpair failed and we were unable to recover it. 
00:20:40.698 [2024-04-26 14:25:22.063911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.064069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.064109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.698 qpair failed and we were unable to recover it. 00:20:40.698 [2024-04-26 14:25:22.064223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.064316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.064341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.698 qpair failed and we were unable to recover it. 00:20:40.698 [2024-04-26 14:25:22.064459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.064608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.064678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.698 qpair failed and we were unable to recover it. 00:20:40.698 [2024-04-26 14:25:22.064780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.698 [2024-04-26 14:25:22.064903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.064929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.699 qpair failed and we were unable to recover it. 00:20:40.699 [2024-04-26 14:25:22.065057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.065164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.065189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.699 qpair failed and we were unable to recover it. 00:20:40.699 [2024-04-26 14:25:22.065316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.065454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.065494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.699 qpair failed and we were unable to recover it. 00:20:40.699 [2024-04-26 14:25:22.065612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.065735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.065762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.699 qpair failed and we were unable to recover it. 
00:20:40.699 [2024-04-26 14:25:22.065879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.065985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.066026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.699 qpair failed and we were unable to recover it. 00:20:40.699 [2024-04-26 14:25:22.066126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.066258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.066285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.699 qpair failed and we were unable to recover it. 00:20:40.699 [2024-04-26 14:25:22.066397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.066498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.066524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.699 qpair failed and we were unable to recover it. 00:20:40.699 [2024-04-26 14:25:22.066635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.066741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.066780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.699 qpair failed and we were unable to recover it. 00:20:40.699 [2024-04-26 14:25:22.066892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.067029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.067110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.699 qpair failed and we were unable to recover it. 00:20:40.699 [2024-04-26 14:25:22.067224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.067361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.067390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.699 qpair failed and we were unable to recover it. 00:20:40.699 [2024-04-26 14:25:22.067503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.067613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.067659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.699 qpair failed and we were unable to recover it. 
00:20:40.699 [2024-04-26 14:25:22.067790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.067903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.067929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.699 qpair failed and we were unable to recover it. 00:20:40.699 [2024-04-26 14:25:22.068072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.068202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.068228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.699 qpair failed and we were unable to recover it. 00:20:40.699 [2024-04-26 14:25:22.068353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.068469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.068496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.699 qpair failed and we were unable to recover it. 00:20:40.699 [2024-04-26 14:25:22.068617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.068753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.068788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.699 qpair failed and we were unable to recover it. 00:20:40.699 [2024-04-26 14:25:22.068918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.069054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.069100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.699 qpair failed and we were unable to recover it. 00:20:40.699 [2024-04-26 14:25:22.069197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.069299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.069325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.699 qpair failed and we were unable to recover it. 00:20:40.699 [2024-04-26 14:25:22.069446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.069547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.069574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.699 qpair failed and we were unable to recover it. 
00:20:40.699 [2024-04-26 14:25:22.069675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.069767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.069792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.699 qpair failed and we were unable to recover it. 00:20:40.699 [2024-04-26 14:25:22.069902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.070020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.070045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.699 qpair failed and we were unable to recover it. 00:20:40.699 [2024-04-26 14:25:22.070162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.070269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.070296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.699 qpair failed and we were unable to recover it. 00:20:40.699 [2024-04-26 14:25:22.070410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.070517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.070544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.699 qpair failed and we were unable to recover it. 00:20:40.699 [2024-04-26 14:25:22.070667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.070799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.070826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.699 qpair failed and we were unable to recover it. 00:20:40.699 [2024-04-26 14:25:22.071006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.071141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.071197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.699 qpair failed and we were unable to recover it. 00:20:40.699 [2024-04-26 14:25:22.071308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.071445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.699 [2024-04-26 14:25:22.071497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.699 qpair failed and we were unable to recover it. 
00:20:40.700 [2024-04-26 14:25:22.071709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.071814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.071840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 00:20:40.700 [2024-04-26 14:25:22.071944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.072057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.072086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 00:20:40.700 [2024-04-26 14:25:22.072212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.072349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.072376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 00:20:40.700 [2024-04-26 14:25:22.072532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.072649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.072693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 00:20:40.700 [2024-04-26 14:25:22.072826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.072989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.073042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 00:20:40.700 [2024-04-26 14:25:22.073220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.073390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.073452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 00:20:40.700 [2024-04-26 14:25:22.073549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.073658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.073685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 
00:20:40.700 [2024-04-26 14:25:22.073782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.073906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.073945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 00:20:40.700 [2024-04-26 14:25:22.074064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.074166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.074191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 00:20:40.700 [2024-04-26 14:25:22.074314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.074438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.074475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 00:20:40.700 [2024-04-26 14:25:22.074616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.074829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.074854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 00:20:40.700 [2024-04-26 14:25:22.074963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.075080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.075119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 00:20:40.700 [2024-04-26 14:25:22.075261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.075408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.075458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 00:20:40.700 [2024-04-26 14:25:22.075561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.075764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.075792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 
00:20:40.700 [2024-04-26 14:25:22.075931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.076055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.076085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 00:20:40.700 [2024-04-26 14:25:22.076218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.076339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.076364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 00:20:40.700 [2024-04-26 14:25:22.076498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.076637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.076683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 00:20:40.700 [2024-04-26 14:25:22.076818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.076956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.076999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 00:20:40.700 [2024-04-26 14:25:22.077121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.077249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.077273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 00:20:40.700 [2024-04-26 14:25:22.077393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.077501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.077533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 00:20:40.700 [2024-04-26 14:25:22.077661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.077786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.077814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 
00:20:40.700 [2024-04-26 14:25:22.077944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.078074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.078102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 00:20:40.700 [2024-04-26 14:25:22.078218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.078340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.078369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 00:20:40.700 [2024-04-26 14:25:22.078498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.078610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.078670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 00:20:40.700 [2024-04-26 14:25:22.078795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.078951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.078997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 00:20:40.700 [2024-04-26 14:25:22.079111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.079227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.079254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 00:20:40.700 [2024-04-26 14:25:22.079391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.079509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.079534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 00:20:40.700 [2024-04-26 14:25:22.079649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.079804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.079832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 
00:20:40.700 [2024-04-26 14:25:22.079958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.080069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.700 [2024-04-26 14:25:22.080094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.700 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.080221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.080331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.080358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.080458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.080568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.080593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.080699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.080805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.080844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.080942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.081057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.081096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.081221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.081328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.081352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.081479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.081603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.081629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 
00:20:40.701 [2024-04-26 14:25:22.081733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.081844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.081902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.082024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.082150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.082175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.082280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.082374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.082400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.082510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.082623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.082682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.082784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.082887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.082914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.083052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.083172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.083214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.083329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.083464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.083508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 
00:20:40.701 [2024-04-26 14:25:22.083625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.083798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.083844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.083955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.084076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.084101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.084246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.084379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.084446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.084555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.084655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.084681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.084789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.084908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.084951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.085099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.085210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.085236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.085350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.085482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.085532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 
00:20:40.701 [2024-04-26 14:25:22.085656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.085828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.085879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.085983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.086086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.086131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.086245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.086344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.086370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.086514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.086623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.086698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.086815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.086969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.087009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.087110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.087217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.087261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.087375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.087501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.087527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 
00:20:40.701 [2024-04-26 14:25:22.087669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.087833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.087880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.088005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.088118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.701 [2024-04-26 14:25:22.088142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.701 qpair failed and we were unable to recover it. 00:20:40.701 [2024-04-26 14:25:22.088281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.088483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.088509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.702 qpair failed and we were unable to recover it. 00:20:40.702 [2024-04-26 14:25:22.088621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.088758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.088791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.702 qpair failed and we were unable to recover it. 00:20:40.702 [2024-04-26 14:25:22.088904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.089029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.089055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.702 qpair failed and we were unable to recover it. 00:20:40.702 [2024-04-26 14:25:22.089184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.089329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.089380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.702 qpair failed and we were unable to recover it. 00:20:40.702 [2024-04-26 14:25:22.089500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.089652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.089678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.702 qpair failed and we were unable to recover it. 
00:20:40.702 [2024-04-26 14:25:22.089803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.089952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.090000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.702 qpair failed and we were unable to recover it. 00:20:40.702 [2024-04-26 14:25:22.090110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.090353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.090378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.702 qpair failed and we were unable to recover it. 00:20:40.702 [2024-04-26 14:25:22.090518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.090664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.090717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.702 qpair failed and we were unable to recover it. 00:20:40.702 [2024-04-26 14:25:22.090827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.090941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.091004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.702 qpair failed and we were unable to recover it. 00:20:40.702 [2024-04-26 14:25:22.091145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.091298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.091356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.702 qpair failed and we were unable to recover it. 00:20:40.702 [2024-04-26 14:25:22.091497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.091709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.091736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.702 qpair failed and we were unable to recover it. 00:20:40.702 [2024-04-26 14:25:22.091850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.091966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.091994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.702 qpair failed and we were unable to recover it. 
00:20:40.702 [2024-04-26 14:25:22.092106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.092200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.092231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.702 qpair failed and we were unable to recover it. 00:20:40.702 [2024-04-26 14:25:22.092334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.092458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.092519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.702 qpair failed and we were unable to recover it. 00:20:40.702 [2024-04-26 14:25:22.092635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.092740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.092765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.702 qpair failed and we were unable to recover it. 00:20:40.702 [2024-04-26 14:25:22.092860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.092971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.093004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.702 qpair failed and we were unable to recover it. 00:20:40.702 [2024-04-26 14:25:22.093143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.093295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.093321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.702 qpair failed and we were unable to recover it. 00:20:40.702 [2024-04-26 14:25:22.093427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.093522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.093548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.702 qpair failed and we were unable to recover it. 00:20:40.702 [2024-04-26 14:25:22.093654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.093812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.702 [2024-04-26 14:25:22.093862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.702 qpair failed and we were unable to recover it. 
00:20:40.702 [2024-04-26 14:25:22.093952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.702 [2024-04-26 14:25:22.094112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.702 [2024-04-26 14:25:22.094139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.702 qpair failed and we were unable to recover it.
00:20:40.702 [2024-04-26 14:25:22.094255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.702 [2024-04-26 14:25:22.094348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.702 [2024-04-26 14:25:22.094375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.702 qpair failed and we were unable to recover it.
00:20:40.702 [2024-04-26 14:25:22.094493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.702 [2024-04-26 14:25:22.094595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.702 [2024-04-26 14:25:22.094621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.702 qpair failed and we were unable to recover it.
00:20:40.702 [2024-04-26 14:25:22.094755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.702 [2024-04-26 14:25:22.094894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.702 [2024-04-26 14:25:22.094952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.702 qpair failed and we were unable to recover it.
00:20:40.702 [2024-04-26 14:25:22.095057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.702 [2024-04-26 14:25:22.095164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.702 [2024-04-26 14:25:22.095189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.702 qpair failed and we were unable to recover it.
00:20:40.702 [2024-04-26 14:25:22.095308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.702 [2024-04-26 14:25:22.095440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.702 [2024-04-26 14:25:22.095483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.702 qpair failed and we were unable to recover it.
00:20:40.702 [2024-04-26 14:25:22.095601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.702 [2024-04-26 14:25:22.095724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.702 [2024-04-26 14:25:22.095750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.702 qpair failed and we were unable to recover it.
00:20:40.702 [2024-04-26 14:25:22.095881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.702 [2024-04-26 14:25:22.096015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.702 [2024-04-26 14:25:22.096059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.702 qpair failed and we were unable to recover it.
00:20:40.702 [2024-04-26 14:25:22.096175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.702 [2024-04-26 14:25:22.096301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.702 [2024-04-26 14:25:22.096346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.702 qpair failed and we were unable to recover it.
00:20:40.702 [2024-04-26 14:25:22.096481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.702 [2024-04-26 14:25:22.096617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.702 [2024-04-26 14:25:22.096683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.702 qpair failed and we were unable to recover it.
00:20:40.702 [2024-04-26 14:25:22.096812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.702 [2024-04-26 14:25:22.096952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.097002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.097124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.097259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.097304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.097416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.097556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.097584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.097685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.097791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.097818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.097931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.098048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.098090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.098198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.098346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.098390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.098489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.098605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.098656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.098765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.098871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.098922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.099053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.099184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.099228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.099374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.099480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.099505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.099623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.099747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.099774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.099897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.100026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.100082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.100208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.100380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.100423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.100523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.100676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.100703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.100815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.100929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.100963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.101085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.101248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.101296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.101425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.101568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.101655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.101783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.101932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.101979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.102159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.102307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.102345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.102504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.102669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.102726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.102858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.102997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.103054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.103181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.103338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.103399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.103535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.103710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.103738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.103849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.103949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.103977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.104086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.104192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.104221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.104321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.104424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.104453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.104579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.104697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.104731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.104877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.105006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.703 [2024-04-26 14:25:22.105035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.703 qpair failed and we were unable to recover it.
00:20:40.703 [2024-04-26 14:25:22.105215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.105410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.105441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.704 qpair failed and we were unable to recover it.
00:20:40.704 [2024-04-26 14:25:22.105645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.105746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.105775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.704 qpair failed and we were unable to recover it.
00:20:40.704 [2024-04-26 14:25:22.105931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.106068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.106117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.704 qpair failed and we were unable to recover it.
00:20:40.704 [2024-04-26 14:25:22.106279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.106429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.106458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.704 qpair failed and we were unable to recover it.
00:20:40.704 [2024-04-26 14:25:22.106593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.106757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.106784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.704 qpair failed and we were unable to recover it.
00:20:40.704 [2024-04-26 14:25:22.106902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.107086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.107145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.704 qpair failed and we were unable to recover it.
00:20:40.704 [2024-04-26 14:25:22.107271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.107436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.107496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.704 qpair failed and we were unable to recover it.
00:20:40.704 [2024-04-26 14:25:22.107607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.107744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.107773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.704 qpair failed and we were unable to recover it.
00:20:40.704 [2024-04-26 14:25:22.107896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.108001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.108034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.704 qpair failed and we were unable to recover it.
00:20:40.704 [2024-04-26 14:25:22.108127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.108223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.108250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.704 qpair failed and we were unable to recover it.
00:20:40.704 [2024-04-26 14:25:22.108359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.108482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.108531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.704 qpair failed and we were unable to recover it.
00:20:40.704 [2024-04-26 14:25:22.108648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.108788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.108848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.704 qpair failed and we were unable to recover it.
00:20:40.704 [2024-04-26 14:25:22.108980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.109104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.109135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.704 qpair failed and we were unable to recover it.
00:20:40.704 [2024-04-26 14:25:22.109232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.109333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.109360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.704 qpair failed and we were unable to recover it.
00:20:40.704 [2024-04-26 14:25:22.109469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.109572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.109599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.704 qpair failed and we were unable to recover it.
00:20:40.704 [2024-04-26 14:25:22.109726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.109854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.109889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.704 qpair failed and we were unable to recover it.
00:20:40.704 [2024-04-26 14:25:22.110026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.110183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.110236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.704 qpair failed and we were unable to recover it.
00:20:40.704 [2024-04-26 14:25:22.110348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.110469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.110498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.704 qpair failed and we were unable to recover it.
00:20:40.704 [2024-04-26 14:25:22.110629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.110757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.110790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.704 qpair failed and we were unable to recover it.
00:20:40.704 [2024-04-26 14:25:22.110909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.111031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.111082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.704 qpair failed and we were unable to recover it.
00:20:40.704 [2024-04-26 14:25:22.111212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.111390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.111420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.704 qpair failed and we were unable to recover it.
00:20:40.704 [2024-04-26 14:25:22.111527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.111651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.111686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.704 qpair failed and we were unable to recover it.
00:20:40.704 [2024-04-26 14:25:22.111822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.111965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.112018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.704 qpair failed and we were unable to recover it.
00:20:40.704 [2024-04-26 14:25:22.112139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.112289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.112341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.704 qpair failed and we were unable to recover it.
00:20:40.704 [2024-04-26 14:25:22.112481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.704 [2024-04-26 14:25:22.112616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.112659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.112789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.112938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.112970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.113094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.113196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.113227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.113352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.113451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.113477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.113599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.113770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.113820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.113974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.114136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.114189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.114303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.114410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.114442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.114555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.114694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.114752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.114862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.114984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.115045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.115191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.115326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.115382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.115488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.115628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.115666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.115795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.115988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.116043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.116165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.116332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.116394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.116524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.116621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.116656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.116807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.116928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.116955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.117089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.117244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.117299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.117433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.117552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.117579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.117711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.117904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.117953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.118057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.118216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.118244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.118352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.118458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.118489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.118614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.118789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.118820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.118954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.119071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.119097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.119238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.119391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.119446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.119560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.119684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.119741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.119880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.120023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.120082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.120238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.120365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.120427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.120612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.120741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.120767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.120866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.120993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.121033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.121175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.121292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.121318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.705 qpair failed and we were unable to recover it.
00:20:40.705 [2024-04-26 14:25:22.121450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.705 [2024-04-26 14:25:22.121612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.121667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.121800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.121976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.122018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.122156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.122297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.122351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.122464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.122599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.122657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.122794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.122912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.122941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.123073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.123226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.123252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.123380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.123522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.123579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.123707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.123814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.123858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.123972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.124132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.124195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.124325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.124453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.124512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.124646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.124813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.124858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.124964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.125107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.125157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.125330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.125477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.125520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.125653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.125809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.125867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.125992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.126153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.126201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.126317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.126467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.126518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.126625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.126762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.126816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.126953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.127104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.127146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.127250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.127362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.127412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.127502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.127621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.127674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.127805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.127947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.127996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.128131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.128283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.128331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.128441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.128565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.128591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.128715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.128864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.128909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.129096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.129220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.129265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.129396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.129550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.129576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.129697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.129842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.129902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.130006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.130106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.130142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.130260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.130390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.130417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.706 qpair failed and we were unable to recover it.
00:20:40.706 [2024-04-26 14:25:22.130521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.130645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.706 [2024-04-26 14:25:22.130699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.707 qpair failed and we were unable to recover it.
00:20:40.707 [2024-04-26 14:25:22.130822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.707 [2024-04-26 14:25:22.130937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.707 [2024-04-26 14:25:22.130988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.707 qpair failed and we were unable to recover it.
00:20:40.707 [2024-04-26 14:25:22.131169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.707 [2024-04-26 14:25:22.131388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.707 [2024-04-26 14:25:22.131434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.707 qpair failed and we were unable to recover it.
00:20:40.707 [2024-04-26 14:25:22.131561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.707 [2024-04-26 14:25:22.131725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.707 [2024-04-26 14:25:22.131751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.707 qpair failed and we were unable to recover it.
00:20:40.707 [2024-04-26 14:25:22.131880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.707 [2024-04-26 14:25:22.132019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.707 [2024-04-26 14:25:22.132054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.707 qpair failed and we were unable to recover it.
00:20:40.707 [2024-04-26 14:25:22.132166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.707 [2024-04-26 14:25:22.132318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.707 [2024-04-26 14:25:22.132348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.707 qpair failed and we were unable to recover it.
00:20:40.707 [2024-04-26 14:25:22.132479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.132597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.132622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.707 qpair failed and we were unable to recover it. 00:20:40.707 [2024-04-26 14:25:22.132753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.132893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.132949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.707 qpair failed and we were unable to recover it. 00:20:40.707 [2024-04-26 14:25:22.133083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.133218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.133244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.707 qpair failed and we were unable to recover it. 00:20:40.707 [2024-04-26 14:25:22.133363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.133495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.133542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.707 qpair failed and we were unable to recover it. 00:20:40.707 [2024-04-26 14:25:22.133663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.133806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.133859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.707 qpair failed and we were unable to recover it. 00:20:40.707 [2024-04-26 14:25:22.133974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.134149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.134197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.707 qpair failed and we were unable to recover it. 00:20:40.707 [2024-04-26 14:25:22.134301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.134463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.134517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.707 qpair failed and we were unable to recover it. 
00:20:40.707 [2024-04-26 14:25:22.134640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.134790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.134841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.707 qpair failed and we were unable to recover it. 00:20:40.707 [2024-04-26 14:25:22.134961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.135102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.135142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.707 qpair failed and we were unable to recover it. 00:20:40.707 [2024-04-26 14:25:22.135295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.135438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.135492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.707 qpair failed and we were unable to recover it. 00:20:40.707 [2024-04-26 14:25:22.135661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.135843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.135884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.707 qpair failed and we were unable to recover it. 00:20:40.707 [2024-04-26 14:25:22.136013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.136129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.136155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.707 qpair failed and we were unable to recover it. 00:20:40.707 [2024-04-26 14:25:22.136251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.136370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.136413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.707 qpair failed and we were unable to recover it. 00:20:40.707 [2024-04-26 14:25:22.136512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.136641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.136688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.707 qpair failed and we were unable to recover it. 
00:20:40.707 [2024-04-26 14:25:22.136867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.137026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.137072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.707 qpair failed and we were unable to recover it. 00:20:40.707 [2024-04-26 14:25:22.137185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.137359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.137411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.707 qpair failed and we were unable to recover it. 00:20:40.707 [2024-04-26 14:25:22.137513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.137617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.137672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.707 qpair failed and we were unable to recover it. 00:20:40.707 [2024-04-26 14:25:22.137783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.137914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.137967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.707 qpair failed and we were unable to recover it. 00:20:40.707 [2024-04-26 14:25:22.138145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.138262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.138287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.707 qpair failed and we were unable to recover it. 00:20:40.707 [2024-04-26 14:25:22.138418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.138542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.138574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.707 qpair failed and we were unable to recover it. 00:20:40.707 [2024-04-26 14:25:22.138674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.138836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.138882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.707 qpair failed and we were unable to recover it. 
00:20:40.707 [2024-04-26 14:25:22.139032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.139153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.139179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.707 qpair failed and we were unable to recover it. 00:20:40.707 [2024-04-26 14:25:22.139299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.139445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.139489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.707 qpair failed and we were unable to recover it. 00:20:40.707 [2024-04-26 14:25:22.139602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.139737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.707 [2024-04-26 14:25:22.139762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.707 qpair failed and we were unable to recover it. 00:20:40.708 [2024-04-26 14:25:22.139875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.140029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.140082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.708 qpair failed and we were unable to recover it. 00:20:40.708 [2024-04-26 14:25:22.140256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.140413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.140463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.708 qpair failed and we were unable to recover it. 00:20:40.708 [2024-04-26 14:25:22.140590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.140715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.140766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.708 qpair failed and we were unable to recover it. 00:20:40.708 [2024-04-26 14:25:22.140884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.141034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.141076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.708 qpair failed and we were unable to recover it. 
00:20:40.708 [2024-04-26 14:25:22.141201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.141366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.141414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.708 qpair failed and we were unable to recover it. 00:20:40.708 [2024-04-26 14:25:22.141552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.141682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.141708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.708 qpair failed and we were unable to recover it. 00:20:40.708 [2024-04-26 14:25:22.141813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.141972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.142017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.708 qpair failed and we were unable to recover it. 00:20:40.708 [2024-04-26 14:25:22.142193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.142398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.142445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.708 qpair failed and we were unable to recover it. 00:20:40.708 [2024-04-26 14:25:22.142588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.142745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.142771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.708 qpair failed and we were unable to recover it. 00:20:40.708 [2024-04-26 14:25:22.142903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.143053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.143091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.708 qpair failed and we were unable to recover it. 00:20:40.708 [2024-04-26 14:25:22.143234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.143385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.143426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.708 qpair failed and we were unable to recover it. 
00:20:40.708 [2024-04-26 14:25:22.143618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.143747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.143773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.708 qpair failed and we were unable to recover it. 00:20:40.708 [2024-04-26 14:25:22.143916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.144035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.144083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.708 qpair failed and we were unable to recover it. 00:20:40.708 [2024-04-26 14:25:22.144217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.144411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.144436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.708 qpair failed and we were unable to recover it. 00:20:40.708 [2024-04-26 14:25:22.144551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.144660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.144687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.708 qpair failed and we were unable to recover it. 00:20:40.708 [2024-04-26 14:25:22.144823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.144973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.145028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.708 qpair failed and we were unable to recover it. 00:20:40.708 [2024-04-26 14:25:22.145194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.145348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.145397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.708 qpair failed and we were unable to recover it. 00:20:40.708 [2024-04-26 14:25:22.145524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.145672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.145716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.708 qpair failed and we were unable to recover it. 
00:20:40.708 [2024-04-26 14:25:22.145815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.145925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.145949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.708 qpair failed and we were unable to recover it. 00:20:40.708 [2024-04-26 14:25:22.146067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.146210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.146264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.708 qpair failed and we were unable to recover it. 00:20:40.708 [2024-04-26 14:25:22.146404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.146543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.146568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.708 qpair failed and we were unable to recover it. 00:20:40.708 [2024-04-26 14:25:22.146668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.146782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.146818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.708 qpair failed and we were unable to recover it. 00:20:40.708 [2024-04-26 14:25:22.146931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.147071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.147127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.708 qpair failed and we were unable to recover it. 00:20:40.708 [2024-04-26 14:25:22.147273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.147419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.147464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.708 qpair failed and we were unable to recover it. 00:20:40.708 [2024-04-26 14:25:22.147571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.147677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.708 [2024-04-26 14:25:22.147703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.708 qpair failed and we were unable to recover it. 
00:20:40.709 [2024-04-26 14:25:22.148161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.709 [2024-04-26 14:25:22.148343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.709 [2024-04-26 14:25:22.148398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:40.709 qpair failed and we were unable to recover it.
00:20:40.709 [2024-04-26 14:25:22.148545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.709 [2024-04-26 14:25:22.148674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.709 [2024-04-26 14:25:22.148707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.709 qpair failed and we were unable to recover it.
00:20:40.711 [2024-04-26 14:25:22.176199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.711 [2024-04-26 14:25:22.176325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.711 [2024-04-26 14:25:22.176355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.711 qpair failed and we were unable to recover it. 00:20:40.711 [2024-04-26 14:25:22.176461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.711 [2024-04-26 14:25:22.176593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.711 [2024-04-26 14:25:22.176650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.711 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.176776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.176923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.176979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.177150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.177309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.177359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.177485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.177683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.177714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.177856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.178009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.178063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.178230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.178426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.178471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 
00:20:40.712 [2024-04-26 14:25:22.178601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.178856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.178905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.179058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.179213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.179257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.179369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.179462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.179493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.179668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.179795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.179822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.179981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.180128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.180175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.180339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.180481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.180508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.180695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.180803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.180834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 
00:20:40.712 [2024-04-26 14:25:22.181006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.181144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.181197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.181315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.181437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.181487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.181601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.181750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.181777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.181903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.182057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.182107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.182219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.182333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.182366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.182475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.182593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.182623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.182779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.182979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.183030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 
00:20:40.712 [2024-04-26 14:25:22.183169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.183329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.183379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.183488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.183686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.183719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.183825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.183990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.184040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.184215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.184361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.184411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.184542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.184691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.184745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.184920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.185070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.185120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.185241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.185399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.185448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 
00:20:40.712 [2024-04-26 14:25:22.185577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.185702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.185757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.185890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.186032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.186084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.186239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.186372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.186427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.186577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.186738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.186787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.186894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.187052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.187102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.712 qpair failed and we were unable to recover it. 00:20:40.712 [2024-04-26 14:25:22.187230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.712 [2024-04-26 14:25:22.187383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.187433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 00:20:40.713 [2024-04-26 14:25:22.187573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.187719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.187774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 
00:20:40.713 [2024-04-26 14:25:22.187892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.188039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.188091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 00:20:40.713 [2024-04-26 14:25:22.188259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.188401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.188450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 00:20:40.713 [2024-04-26 14:25:22.188601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.188756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.188810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 00:20:40.713 [2024-04-26 14:25:22.188941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.189109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.189138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 00:20:40.713 [2024-04-26 14:25:22.189275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.189398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.189424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 00:20:40.713 [2024-04-26 14:25:22.189535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.189643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.189671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 00:20:40.713 [2024-04-26 14:25:22.189830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.189953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.189985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 
00:20:40.713 [2024-04-26 14:25:22.190119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.190293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.190349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 00:20:40.713 [2024-04-26 14:25:22.190489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.190616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.190680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 00:20:40.713 [2024-04-26 14:25:22.190845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.191011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.191064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 00:20:40.713 [2024-04-26 14:25:22.191168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.191304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.191361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 00:20:40.713 [2024-04-26 14:25:22.191484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.191655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.191700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 00:20:40.713 [2024-04-26 14:25:22.191831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.191980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.192025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 00:20:40.713 [2024-04-26 14:25:22.192155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.192289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.192343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 
00:20:40.713 [2024-04-26 14:25:22.192450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.192582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.192609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 00:20:40.713 [2024-04-26 14:25:22.192753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.192912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.192967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 00:20:40.713 [2024-04-26 14:25:22.193082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.193220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.193270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 00:20:40.713 [2024-04-26 14:25:22.193409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.193533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.193564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 00:20:40.713 [2024-04-26 14:25:22.193720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.193848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.193880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 00:20:40.713 [2024-04-26 14:25:22.194009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.194162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.194214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 00:20:40.713 [2024-04-26 14:25:22.194367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.194501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.194554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 
00:20:40.713 [2024-04-26 14:25:22.194663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.194798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.194844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 00:20:40.713 [2024-04-26 14:25:22.194980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.195134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.195186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 00:20:40.713 [2024-04-26 14:25:22.195304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.195426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.195478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 00:20:40.713 [2024-04-26 14:25:22.195591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.195743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.195791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.713 qpair failed and we were unable to recover it. 00:20:40.713 [2024-04-26 14:25:22.195926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.713 [2024-04-26 14:25:22.196074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.196129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.196284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.196408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.196437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.196572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.196718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.196774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 
00:20:40.714 [2024-04-26 14:25:22.196896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.197025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.197054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.197204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.197349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.197396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.197511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.197657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.197700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.197845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.197990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.198039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.198165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.198287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.198319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.198460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.198591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.198665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.198807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.198944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.198996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 
00:20:40.714 [2024-04-26 14:25:22.199153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.199310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.199360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.199501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.199652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.199699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.199863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.199991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.200044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.200186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.200352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.200403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.200519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.200649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.200699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.200830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.201033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.201064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.201165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.201276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.201308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 
00:20:40.714 [2024-04-26 14:25:22.201407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.201532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.201564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.201676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.201791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.201820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.201926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.202043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.202075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.202209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.202350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.202400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.202501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.202598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.202628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.202790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.202927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.202954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.203091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.203211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.203236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 
00:20:40.714 [2024-04-26 14:25:22.203376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.203511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.203555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.203668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.203782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.203820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.203961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.204087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.204134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.204232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.204350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.204392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.204511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.204703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.204730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.204866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.204989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.205015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.205144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.205255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.205283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 
00:20:40.714 [2024-04-26 14:25:22.205381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.205500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.205540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.714 qpair failed and we were unable to recover it. 00:20:40.714 [2024-04-26 14:25:22.205654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.714 [2024-04-26 14:25:22.205783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.715 [2024-04-26 14:25:22.205829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.715 qpair failed and we were unable to recover it. 00:20:40.715 [2024-04-26 14:25:22.205947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.715 [2024-04-26 14:25:22.206105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.715 [2024-04-26 14:25:22.206148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.715 qpair failed and we were unable to recover it. 00:20:40.715 [2024-04-26 14:25:22.206264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.715 [2024-04-26 14:25:22.206400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.715 [2024-04-26 14:25:22.206435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.715 qpair failed and we were unable to recover it. 00:20:40.715 [2024-04-26 14:25:22.206551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.715 [2024-04-26 14:25:22.206656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.715 [2024-04-26 14:25:22.206683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.715 qpair failed and we were unable to recover it. 00:20:40.715 [2024-04-26 14:25:22.206799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.715 [2024-04-26 14:25:22.206948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.715 [2024-04-26 14:25:22.206991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.715 qpair failed and we were unable to recover it. 00:20:40.715 [2024-04-26 14:25:22.207121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.715 [2024-04-26 14:25:22.207247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.715 [2024-04-26 14:25:22.207287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.715 qpair failed and we were unable to recover it. 
00:20:40.715 [2024-04-26 14:25:22.207429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.715 [2024-04-26 14:25:22.207540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.715 [2024-04-26 14:25:22.207572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.715 qpair failed and we were unable to recover it. 00:20:40.715 [2024-04-26 14:25:22.207687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.715 [2024-04-26 14:25:22.207808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.715 [2024-04-26 14:25:22.207857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.715 qpair failed and we were unable to recover it. 00:20:40.715 [2024-04-26 14:25:22.207997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.715 [2024-04-26 14:25:22.208142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.715 [2024-04-26 14:25:22.208182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.715 qpair failed and we were unable to recover it. 00:20:40.715 [2024-04-26 14:25:22.208308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.715 [2024-04-26 14:25:22.208432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.715 [2024-04-26 14:25:22.208468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.715 qpair failed and we were unable to recover it. 00:20:40.715 [2024-04-26 14:25:22.208600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.715 [2024-04-26 14:25:22.208726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.715 [2024-04-26 14:25:22.208772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.715 qpair failed and we were unable to recover it. 00:20:40.715 [2024-04-26 14:25:22.208892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.715 [2024-04-26 14:25:22.209012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.715 [2024-04-26 14:25:22.209039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.715 qpair failed and we were unable to recover it. 00:20:40.715 [2024-04-26 14:25:22.209165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.715 [2024-04-26 14:25:22.209313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.715 [2024-04-26 14:25:22.209338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.715 qpair failed and we were unable to recover it. 
00:20:40.715 [2024-04-26 14:25:22.209452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.715 [2024-04-26 14:25:22.209569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.715 [2024-04-26 14:25:22.209594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:40.715 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." sequence repeats continuously for tqpair=0x124d340 from 14:25:22.209 through 14:25:22.248 ...]
00:20:40.999 [2024-04-26 14:25:22.247820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.999 [2024-04-26 14:25:22.247951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.999 [2024-04-26 14:25:22.247994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.999 qpair failed and we were unable to recover it. 00:20:40.999 [2024-04-26 14:25:22.248105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.999 [2024-04-26 14:25:22.248230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.999 [2024-04-26 14:25:22.248292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.999 qpair failed and we were unable to recover it. 00:20:40.999 [2024-04-26 14:25:22.248382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.999 [2024-04-26 14:25:22.248474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.999 [2024-04-26 14:25:22.248501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:40.999 qpair failed and we were unable to recover it. 00:20:40.999 [2024-04-26 14:25:22.248661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.999 [2024-04-26 14:25:22.248823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.999 [2024-04-26 14:25:22.248879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.999 qpair failed and we were unable to recover it. 00:20:40.999 [2024-04-26 14:25:22.248993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.999 [2024-04-26 14:25:22.249128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.999 [2024-04-26 14:25:22.249175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.999 qpair failed and we were unable to recover it. 00:20:40.999 [2024-04-26 14:25:22.249307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.999 [2024-04-26 14:25:22.249418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.999 [2024-04-26 14:25:22.249444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.999 qpair failed and we were unable to recover it. 00:20:40.999 [2024-04-26 14:25:22.249570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.999 [2024-04-26 14:25:22.249747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.999 [2024-04-26 14:25:22.249803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:40.999 qpair failed and we were unable to recover it. 
00:20:40.999 [2024-04-26 14:25:22.249911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.999 [2024-04-26 14:25:22.250030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.999 [2024-04-26 14:25:22.250076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.999 qpair failed and we were unable to recover it.
00:20:40.999 [2024-04-26 14:25:22.250201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.999 [2024-04-26 14:25:22.250329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.999 [2024-04-26 14:25:22.250374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.999 qpair failed and we were unable to recover it.
00:20:40.999 [2024-04-26 14:25:22.250512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.999 [2024-04-26 14:25:22.250647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.999 [2024-04-26 14:25:22.250695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.999 qpair failed and we were unable to recover it.
00:20:40.999 [2024-04-26 14:25:22.250843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.999 [2024-04-26 14:25:22.250959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.999 [2024-04-26 14:25:22.251025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.999 qpair failed and we were unable to recover it.
00:20:40.999 [2024-04-26 14:25:22.251169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.999 [2024-04-26 14:25:22.251314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:40.999 [2024-04-26 14:25:22.251370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:40.999 qpair failed and we were unable to recover it.
00:20:40.999 [2024-04-26 14:25:22.251491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.251694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.251721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.251848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.251968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.251995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.252117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.252247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.252273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.252389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.252497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.252523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.252640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.252752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.252777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.252907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.253054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.253107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.253217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.253360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.253407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.253548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.253655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.253701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.253810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.253950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.253997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.254124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.254303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.254339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.254449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.254565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.254611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.254716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.254809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.254834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.254946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.255118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.255170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.255281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.255414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.255470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.255577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.255708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.255734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.255852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.255965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.255990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.256136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.256301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.256361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.256530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.256667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.256693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.256809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.256946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.256987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.257086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.257201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.257245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.257353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.257463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.257488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.257597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.257767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.257818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.257936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.258051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.258076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.258171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.258279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.258323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.258426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.258516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.258541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.258648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.258794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.258845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.258946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.259069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.259119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.259217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.259337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.000 [2024-04-26 14:25:22.259381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.000 qpair failed and we were unable to recover it.
00:20:41.000 [2024-04-26 14:25:22.259477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.259568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.259593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.001 qpair failed and we were unable to recover it.
00:20:41.001 [2024-04-26 14:25:22.259692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.259811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.259865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.001 qpair failed and we were unable to recover it.
00:20:41.001 [2024-04-26 14:25:22.259980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.260076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.260101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.001 qpair failed and we were unable to recover it.
00:20:41.001 [2024-04-26 14:25:22.260214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.260380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.260441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.001 qpair failed and we were unable to recover it.
00:20:41.001 [2024-04-26 14:25:22.260531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.260623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.260657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.001 qpair failed and we were unable to recover it.
00:20:41.001 [2024-04-26 14:25:22.260805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.260934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.260969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.001 qpair failed and we were unable to recover it.
00:20:41.001 [2024-04-26 14:25:22.261089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.261228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.261272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.001 qpair failed and we were unable to recover it.
00:20:41.001 [2024-04-26 14:25:22.261395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.261517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.261543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.001 qpair failed and we were unable to recover it.
00:20:41.001 [2024-04-26 14:25:22.261642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.261742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.261770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.001 qpair failed and we were unable to recover it.
00:20:41.001 [2024-04-26 14:25:22.261882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.262000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.262025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.001 qpair failed and we were unable to recover it.
00:20:41.001 [2024-04-26 14:25:22.262116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.262288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.262313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.001 qpair failed and we were unable to recover it.
00:20:41.001 [2024-04-26 14:25:22.262444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.262601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.262660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.001 qpair failed and we were unable to recover it.
00:20:41.001 [2024-04-26 14:25:22.262757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.262874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.262919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.001 qpair failed and we were unable to recover it.
00:20:41.001 [2024-04-26 14:25:22.263020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.263145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.263226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.001 qpair failed and we were unable to recover it.
00:20:41.001 [2024-04-26 14:25:22.263348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.263494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.263545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.001 qpair failed and we were unable to recover it.
00:20:41.001 [2024-04-26 14:25:22.263657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.263765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.263790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.001 qpair failed and we were unable to recover it.
00:20:41.001 [2024-04-26 14:25:22.263920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.264046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.264092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.001 qpair failed and we were unable to recover it.
00:20:41.001 [2024-04-26 14:25:22.264218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.264388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.264435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.001 qpair failed and we were unable to recover it.
00:20:41.001 [2024-04-26 14:25:22.264548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.264639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.264666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.001 qpair failed and we were unable to recover it.
00:20:41.001 [2024-04-26 14:25:22.264791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.264916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.264941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.001 qpair failed and we were unable to recover it.
00:20:41.001 [2024-04-26 14:25:22.265046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.265200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.265264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.001 qpair failed and we were unable to recover it.
00:20:41.001 [2024-04-26 14:25:22.265390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.265503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.265529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.001 qpair failed and we were unable to recover it.
00:20:41.001 [2024-04-26 14:25:22.265622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.265722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.265748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.001 qpair failed and we were unable to recover it.
00:20:41.001 [2024-04-26 14:25:22.265873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.266028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.266053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.001 qpair failed and we were unable to recover it.
00:20:41.001 [2024-04-26 14:25:22.266243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.266378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.266403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.001 qpair failed and we were unable to recover it.
00:20:41.001 [2024-04-26 14:25:22.266493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.001 [2024-04-26 14:25:22.266608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.266665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.266768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.266925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.266979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.267106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.267240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.267287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.267404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.267526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.267552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.267654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.267743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.267769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.267897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.268022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.268079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.268201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.268353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.268407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.268519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.268651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.268688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.268815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.268940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.268975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.269120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.269274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.269300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.269396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.269508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.269559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.269670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.269798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.269843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.269964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.270088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.270113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.270222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.270329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.270354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.270465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.270567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.270592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.270716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.270869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.270938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.271069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.271227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.271267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.271382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.271493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.271518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.271610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.271730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.271779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.271901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.272041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.272078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.272221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.272370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.272426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.272544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.272655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.272682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.272806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.272990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.273049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.273142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.273241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.273269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.273361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.273454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.273479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.273586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.273685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.273712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.273832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.273979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.274032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.274162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.274290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.274346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.274442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.274535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.274560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.274653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.274768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.274812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.274944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.275053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.002 [2024-04-26 14:25:22.275079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.002 qpair failed and we were unable to recover it.
00:20:41.002 [2024-04-26 14:25:22.275202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.275400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.275457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.275577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.275686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.275711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.275821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.275958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.276004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.276110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.276225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.276251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.276378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.276534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.276586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.276703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.276858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.276909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.277010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.277119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.277171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.277297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.277423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.277481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.277610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.277795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.277843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.277955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.278105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.278168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.278294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.278418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.278443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.278550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.278667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.278714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.278822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.278938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.278964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.279079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.279190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.279215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.279374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.279506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.279531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.279647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.279793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.279844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.279952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.280104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.280152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.280243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.280351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.280395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.280490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.280623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.280688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.280823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.280944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.281004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.281150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.281260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.281285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.281374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.281463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.281487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.281596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.281746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.281792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.281904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.282010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.282035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.282124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.282219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.282246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.282339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.282441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.282466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.282556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.282655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.003 [2024-04-26 14:25:22.282681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.003 qpair failed and we were unable to recover it.
00:20:41.003 [2024-04-26 14:25:22.282774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.004 [2024-04-26 14:25:22.282899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.004 [2024-04-26 14:25:22.282980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.004 qpair failed and we were unable to recover it.
00:20:41.004 [2024-04-26 14:25:22.283088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.004 [2024-04-26 14:25:22.283232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.004 [2024-04-26 14:25:22.283287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.004 qpair failed and we were unable to recover it.
00:20:41.004 [2024-04-26 14:25:22.283441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.004 [2024-04-26 14:25:22.283556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.004 [2024-04-26 14:25:22.283581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.004 qpair failed and we were unable to recover it.
00:20:41.004 [2024-04-26 14:25:22.283704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.004 [2024-04-26 14:25:22.283818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.004 [2024-04-26 14:25:22.283842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.004 qpair failed and we were unable to recover it.
00:20:41.004 [2024-04-26 14:25:22.283958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.004 [2024-04-26 14:25:22.284098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.004 [2024-04-26 14:25:22.284142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.004 qpair failed and we were unable to recover it.
00:20:41.004 [2024-04-26 14:25:22.284249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.004 [2024-04-26 14:25:22.284394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.004 [2024-04-26 14:25:22.284440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.004 qpair failed and we were unable to recover it.
00:20:41.004 [2024-04-26 14:25:22.284539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.004 [2024-04-26 14:25:22.284669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.004 [2024-04-26 14:25:22.284715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.004 qpair failed and we were unable to recover it.
00:20:41.004 [2024-04-26 14:25:22.284808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.004 [2024-04-26 14:25:22.284899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.004 [2024-04-26 14:25:22.284924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.004 qpair failed and we were unable to recover it.
00:20:41.004 [2024-04-26 14:25:22.285038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.004 [2024-04-26 14:25:22.285203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.004 [2024-04-26 14:25:22.285260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.004 qpair failed and we were unable to recover it.
00:20:41.004 [2024-04-26 14:25:22.285374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.004 [2024-04-26 14:25:22.285505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.004 [2024-04-26 14:25:22.285530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.004 qpair failed and we were unable to recover it.
00:20:41.004 [2024-04-26 14:25:22.285652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.285763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.285788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.004 qpair failed and we were unable to recover it. 00:20:41.004 [2024-04-26 14:25:22.285894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.286021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.286102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.004 qpair failed and we were unable to recover it. 00:20:41.004 [2024-04-26 14:25:22.286221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.286327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.286351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.004 qpair failed and we were unable to recover it. 00:20:41.004 [2024-04-26 14:25:22.286459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.286587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.286611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.004 qpair failed and we were unable to recover it. 00:20:41.004 [2024-04-26 14:25:22.286745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.286862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.286887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.004 qpair failed and we were unable to recover it. 00:20:41.004 [2024-04-26 14:25:22.287004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.287115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.287140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.004 qpair failed and we were unable to recover it. 00:20:41.004 [2024-04-26 14:25:22.287239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.287334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.287359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.004 qpair failed and we were unable to recover it. 
00:20:41.004 [2024-04-26 14:25:22.287475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.287650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.287677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.004 qpair failed and we were unable to recover it. 00:20:41.004 [2024-04-26 14:25:22.287796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.287936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.288005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.004 qpair failed and we were unable to recover it. 00:20:41.004 [2024-04-26 14:25:22.288130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.288253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.288301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.004 qpair failed and we were unable to recover it. 00:20:41.004 [2024-04-26 14:25:22.288433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.288545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.288570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.004 qpair failed and we were unable to recover it. 00:20:41.004 [2024-04-26 14:25:22.288673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.288839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.288887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.004 qpair failed and we were unable to recover it. 00:20:41.004 [2024-04-26 14:25:22.289023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.289164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.289198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:41.004 qpair failed and we were unable to recover it. 00:20:41.004 [2024-04-26 14:25:22.289353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.289483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.289568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:41.004 qpair failed and we were unable to recover it. 
00:20:41.004 [2024-04-26 14:25:22.289726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.289936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.289966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:41.004 qpair failed and we were unable to recover it. 00:20:41.004 [2024-04-26 14:25:22.290091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.290199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.290246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.004 qpair failed and we were unable to recover it. 00:20:41.004 [2024-04-26 14:25:22.290347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.290455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.290500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.004 qpair failed and we were unable to recover it. 00:20:41.004 [2024-04-26 14:25:22.290621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.290783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.290822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.004 qpair failed and we were unable to recover it. 00:20:41.004 [2024-04-26 14:25:22.290932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.291055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.291117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.004 qpair failed and we were unable to recover it. 00:20:41.004 [2024-04-26 14:25:22.291246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.291395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.291448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.004 qpair failed and we were unable to recover it. 00:20:41.004 [2024-04-26 14:25:22.291542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.291685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.004 [2024-04-26 14:25:22.291711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.004 qpair failed and we were unable to recover it. 
00:20:41.004 [2024-04-26 14:25:22.291810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.291917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.291960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.292087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.292264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.292320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.292471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.292620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.292679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.292798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.292918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.292943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.293063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.293196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.293249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.293361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.293488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.293541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.293667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.293833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.293875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 
00:20:41.005 [2024-04-26 14:25:22.294006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.294123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.294149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.294264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.294401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.294441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.294542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.294673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.294719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.294814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.294935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.294974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.295089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.295213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.295269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.295368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.295480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.295505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.295642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.295823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.295869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 
00:20:41.005 [2024-04-26 14:25:22.295988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.296138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.296174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.296341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.296531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.296586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.296728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.296880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.296934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.297069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.297217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.297261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.297386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.297532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.297582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.297684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.297858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.297917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.298048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.298155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.298181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 
00:20:41.005 [2024-04-26 14:25:22.298298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.298443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.298524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.298687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.298782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.298811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.298980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.299095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.299149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.299275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.299408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.299455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.299576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.299738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.299801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.299924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.300046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.300106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.300252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.300397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.300423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 
00:20:41.005 [2024-04-26 14:25:22.300519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.300640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.300706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.300867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.301015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.301059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.005 qpair failed and we were unable to recover it. 00:20:41.005 [2024-04-26 14:25:22.301166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.005 [2024-04-26 14:25:22.301298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.301336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.006 qpair failed and we were unable to recover it. 00:20:41.006 [2024-04-26 14:25:22.301452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.301550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.301575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.006 qpair failed and we were unable to recover it. 00:20:41.006 [2024-04-26 14:25:22.301711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.301888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.301932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.006 qpair failed and we were unable to recover it. 00:20:41.006 [2024-04-26 14:25:22.302076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.302211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.302255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.006 qpair failed and we were unable to recover it. 00:20:41.006 [2024-04-26 14:25:22.302354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.302479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.302564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.006 qpair failed and we were unable to recover it. 
00:20:41.006 [2024-04-26 14:25:22.302697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.302853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.302898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.006 qpair failed and we were unable to recover it. 00:20:41.006 [2024-04-26 14:25:22.303026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.303167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.303214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.006 qpair failed and we were unable to recover it. 00:20:41.006 [2024-04-26 14:25:22.303319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.303449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.303494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.006 qpair failed and we were unable to recover it. 00:20:41.006 [2024-04-26 14:25:22.303593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.303697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.303729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.006 qpair failed and we were unable to recover it. 00:20:41.006 [2024-04-26 14:25:22.303837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.303932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.303957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.006 qpair failed and we were unable to recover it. 00:20:41.006 [2024-04-26 14:25:22.304053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.304156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.304181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.006 qpair failed and we were unable to recover it. 00:20:41.006 [2024-04-26 14:25:22.304279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.304413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.304440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.006 qpair failed and we were unable to recover it. 
00:20:41.006 [2024-04-26 14:25:22.304545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.304654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.304680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.006 qpair failed and we were unable to recover it. 00:20:41.006 [2024-04-26 14:25:22.304800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.304962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.304989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.006 qpair failed and we were unable to recover it. 00:20:41.006 [2024-04-26 14:25:22.305081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.305172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.305197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.006 qpair failed and we were unable to recover it. 00:20:41.006 [2024-04-26 14:25:22.305345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.305463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.305488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.006 qpair failed and we were unable to recover it. 00:20:41.006 [2024-04-26 14:25:22.305612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.305811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.305862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.006 qpair failed and we were unable to recover it. 00:20:41.006 [2024-04-26 14:25:22.305996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.306125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.306152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.006 qpair failed and we were unable to recover it. 00:20:41.006 [2024-04-26 14:25:22.306269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.306403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.306431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.006 qpair failed and we were unable to recover it. 
00:20:41.006 [2024-04-26 14:25:22.306563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.306702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.306760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.006 qpair failed and we were unable to recover it. 00:20:41.006 [2024-04-26 14:25:22.306880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.307025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.307070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.006 qpair failed and we were unable to recover it. 00:20:41.006 [2024-04-26 14:25:22.307234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.307376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.307421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.006 qpair failed and we were unable to recover it. 00:20:41.006 [2024-04-26 14:25:22.307556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.307685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.307740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.006 qpair failed and we were unable to recover it. 00:20:41.006 [2024-04-26 14:25:22.307868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.308034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.308095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.006 qpair failed and we were unable to recover it. 00:20:41.006 [2024-04-26 14:25:22.308214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.006 [2024-04-26 14:25:22.308508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.308554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.308687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.308812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.308837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 
00:20:41.007 [2024-04-26 14:25:22.308958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.309114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.309159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.309292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.309446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.309497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.309624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.309782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.309862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.309956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.310082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.310127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.310246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.310392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.310432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.310553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.310705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.310766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.310880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.311036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.311122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 
00:20:41.007 [2024-04-26 14:25:22.311236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.311397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.311454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.311551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.311672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.311738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.311881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.312014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.312039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.312164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.312287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.312314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.312437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.312666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.312714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.312884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.313032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.313112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.313273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.313424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.313482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 
00:20:41.007 [2024-04-26 14:25:22.313602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.313746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.313810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.313959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.314103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.314149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.314243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.314382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.314443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.314545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.314700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.314754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.314869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.315007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.315073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.315298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.315477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.315540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.315672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.315818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.315843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 
00:20:41.007 [2024-04-26 14:25:22.315965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.316145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.316194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.316321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.316463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.316529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.316717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.316897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.316949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.317080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.317241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.317292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.317411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.317545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.317579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.317686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.317805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.317852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.318011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.318190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.318215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 
00:20:41.007 [2024-04-26 14:25:22.318342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.318532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.318589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.007 qpair failed and we were unable to recover it. 00:20:41.007 [2024-04-26 14:25:22.318731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.007 [2024-04-26 14:25:22.318851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.008 [2024-04-26 14:25:22.318900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.008 qpair failed and we were unable to recover it. 00:20:41.008 [2024-04-26 14:25:22.319009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.008 [2024-04-26 14:25:22.319233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.008 [2024-04-26 14:25:22.319274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.008 qpair failed and we were unable to recover it. 00:20:41.008 [2024-04-26 14:25:22.319368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.008 [2024-04-26 14:25:22.319537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.008 [2024-04-26 14:25:22.319593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.008 qpair failed and we were unable to recover it. 00:20:41.008 [2024-04-26 14:25:22.319711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.008 [2024-04-26 14:25:22.319888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.008 [2024-04-26 14:25:22.319937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.008 qpair failed and we were unable to recover it. 00:20:41.008 [2024-04-26 14:25:22.320061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.008 [2024-04-26 14:25:22.320242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.008 [2024-04-26 14:25:22.320289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.008 qpair failed and we were unable to recover it. 00:20:41.008 [2024-04-26 14:25:22.320414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.008 [2024-04-26 14:25:22.320549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.008 [2024-04-26 14:25:22.320585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.008 qpair failed and we were unable to recover it. 
00:20:41.008 [2024-04-26 14:25:22.320724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.008 [2024-04-26 14:25:22.320839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.008 [2024-04-26 14:25:22.320863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.008 qpair failed and we were unable to recover it.
00:20:41.008 [... the same three-message failure pattern repeats continuously from 14:25:22.320724 through 14:25:22.369019: two posix_sock_create connect() errors (errno = 111, ECONNREFUSED) followed by an nvme_tcp_qpair_connect_sock connection error against addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it.", with the failing qpair alternating between tqpair=0x124d340 and tqpair=0x7f6a58000b90 ...]
00:20:41.013 [2024-04-26 14:25:22.368879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.368995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.369019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.013 qpair failed and we were unable to recover it.
00:20:41.013 [2024-04-26 14:25:22.369150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.369280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.369338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.013 qpair failed and we were unable to recover it. 00:20:41.013 [2024-04-26 14:25:22.369453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.369565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.369592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.013 qpair failed and we were unable to recover it. 00:20:41.013 [2024-04-26 14:25:22.369693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.369805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.369850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.013 qpair failed and we were unable to recover it. 00:20:41.013 [2024-04-26 14:25:22.369943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.370049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.370094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.013 qpair failed and we were unable to recover it. 00:20:41.013 [2024-04-26 14:25:22.370242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.370358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.370402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.013 qpair failed and we were unable to recover it. 00:20:41.013 [2024-04-26 14:25:22.370508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.370680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.370708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.013 qpair failed and we were unable to recover it. 00:20:41.013 [2024-04-26 14:25:22.370823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.370970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.371014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.013 qpair failed and we were unable to recover it. 
00:20:41.013 [2024-04-26 14:25:22.371135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.371298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.371340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.013 qpair failed and we were unable to recover it. 00:20:41.013 [2024-04-26 14:25:22.371466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.371587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.371656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.013 qpair failed and we were unable to recover it. 00:20:41.013 [2024-04-26 14:25:22.371762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.371891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.371948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.013 qpair failed and we were unable to recover it. 00:20:41.013 [2024-04-26 14:25:22.372064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.372221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.372272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.013 qpair failed and we were unable to recover it. 00:20:41.013 [2024-04-26 14:25:22.372377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.372489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.372533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.013 qpair failed and we were unable to recover it. 00:20:41.013 [2024-04-26 14:25:22.372679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.372804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.372829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.013 qpair failed and we were unable to recover it. 00:20:41.013 [2024-04-26 14:25:22.372990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.373128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.373183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.013 qpair failed and we were unable to recover it. 
00:20:41.013 [2024-04-26 14:25:22.373294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.373422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.373481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.013 qpair failed and we were unable to recover it. 00:20:41.013 [2024-04-26 14:25:22.373622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.373748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.373785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.013 qpair failed and we were unable to recover it. 00:20:41.013 [2024-04-26 14:25:22.373915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.374162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.374212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.013 qpair failed and we were unable to recover it. 00:20:41.013 [2024-04-26 14:25:22.374310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.374431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.374476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.013 qpair failed and we were unable to recover it. 00:20:41.013 [2024-04-26 14:25:22.374574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.374702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.374732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.013 qpair failed and we were unable to recover it. 00:20:41.013 [2024-04-26 14:25:22.374860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.375011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.375055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.013 qpair failed and we were unable to recover it. 00:20:41.013 [2024-04-26 14:25:22.375176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.375312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.375361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.013 qpair failed and we were unable to recover it. 
00:20:41.013 [2024-04-26 14:25:22.375490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.375621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.375712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.013 qpair failed and we were unable to recover it. 00:20:41.013 [2024-04-26 14:25:22.375846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.375982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.013 [2024-04-26 14:25:22.376025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.013 qpair failed and we were unable to recover it. 00:20:41.014 [2024-04-26 14:25:22.376134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.376289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.376330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 00:20:41.014 [2024-04-26 14:25:22.376446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.376548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.376573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 00:20:41.014 [2024-04-26 14:25:22.376677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.376824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.376889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 00:20:41.014 [2024-04-26 14:25:22.377023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.377177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.377218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 00:20:41.014 [2024-04-26 14:25:22.377323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.377440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.377465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 
00:20:41.014 [2024-04-26 14:25:22.377562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.377665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.377692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 00:20:41.014 [2024-04-26 14:25:22.377784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.377881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.377906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 00:20:41.014 [2024-04-26 14:25:22.378032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.378153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.378181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 00:20:41.014 [2024-04-26 14:25:22.378318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.378452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.378510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 00:20:41.014 [2024-04-26 14:25:22.378654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.378788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.378833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 00:20:41.014 [2024-04-26 14:25:22.378954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.379073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.379127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 00:20:41.014 [2024-04-26 14:25:22.379245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.379366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.379406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 
00:20:41.014 [2024-04-26 14:25:22.379496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.379639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.379666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 00:20:41.014 [2024-04-26 14:25:22.379757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.379848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.379874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 00:20:41.014 [2024-04-26 14:25:22.379967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.380080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.380133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 00:20:41.014 [2024-04-26 14:25:22.380239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.380375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.380420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 00:20:41.014 [2024-04-26 14:25:22.380516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.380658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.380698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 00:20:41.014 [2024-04-26 14:25:22.380788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.380908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.380953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 00:20:41.014 [2024-04-26 14:25:22.381054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.381155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.381184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 
00:20:41.014 [2024-04-26 14:25:22.381370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.381481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.381507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 00:20:41.014 [2024-04-26 14:25:22.381627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.381765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.381790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 00:20:41.014 [2024-04-26 14:25:22.381907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.382102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.382150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 00:20:41.014 [2024-04-26 14:25:22.382244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.382421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.382461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 00:20:41.014 [2024-04-26 14:25:22.382554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.382655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.382681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 00:20:41.014 [2024-04-26 14:25:22.382773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.382880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.382920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 00:20:41.014 [2024-04-26 14:25:22.383036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.383174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.383229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 
00:20:41.014 [2024-04-26 14:25:22.383353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.383466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.383491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.014 qpair failed and we were unable to recover it. 00:20:41.014 [2024-04-26 14:25:22.383592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.383754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.014 [2024-04-26 14:25:22.383805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.383986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.384128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.384182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.384275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.384365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.384390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.384485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.384598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.384650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.384763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.384881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.384907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.385023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.385153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.385200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 
00:20:41.015 [2024-04-26 14:25:22.385338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.385460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.385488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.385617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.385746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.385773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.385893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.386020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.386078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.386188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.386311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.386335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.386444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.386561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.386585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.386700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.386846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.386888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.386994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.387123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.387171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 
00:20:41.015 [2024-04-26 14:25:22.387273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.387391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.387419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.387534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.387714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.387740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.387857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.387986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.388032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.388151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.388275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.388300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.388409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.388518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.388543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.388647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.388783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.388812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.388945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.389076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.389102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 
00:20:41.015 [2024-04-26 14:25:22.389193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.389280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.389305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.389421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.389537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.389569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.389671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.389787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.389849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.389964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.390114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.390165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.390286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.390404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.390463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.390563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.390662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.390687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.390802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.390930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.390974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 
00:20:41.015 [2024-04-26 14:25:22.391098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.391296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.391346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.391455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.391563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.391588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.391707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.391848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.391899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.015 qpair failed and we were unable to recover it. 00:20:41.015 [2024-04-26 14:25:22.392007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.015 [2024-04-26 14:25:22.392120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.392145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.392326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.392466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.392512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.392664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.392776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.392801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.392939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.393033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.393058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 
00:20:41.016 [2024-04-26 14:25:22.393172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.393296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.393324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.393448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.393560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.393585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.393699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.393811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.393836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.393929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.394042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.394090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.394199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.394366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.394406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.394497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.394588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.394613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.394725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.394884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.394913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 
00:20:41.016 [2024-04-26 14:25:22.395038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.395153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.395180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.395295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.395421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.395483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.395610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.395797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.395837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.395954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.396087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.396144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.396236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.396340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.396401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.396495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.396593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.396620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.396765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.396908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.396977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 
00:20:41.016 [2024-04-26 14:25:22.397161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.397355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.397405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.397508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.397622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.397686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.397836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.397968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.398021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.398129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.398246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.398272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.398377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.398496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.398559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.398751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.398892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.398949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.399098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.399233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.399288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 
00:20:41.016 [2024-04-26 14:25:22.399396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.399551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.399598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.399713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.399917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.399968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.400083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.400224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.400270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.400416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.400572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.400626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.400753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.400866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.400892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.400992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.401104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.401145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 00:20:41.016 [2024-04-26 14:25:22.401291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.401385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.016 [2024-04-26 14:25:22.401411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.016 qpair failed and we were unable to recover it. 
00:20:41.017 [2024-04-26 14:25:22.401529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.401706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.401732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.401855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.401993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.402046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.402162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.402306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.402348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.402468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.402663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.402707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.402816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.402933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.402959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.403098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.403245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.403302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.403411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.403522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.403547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 
00:20:41.017 [2024-04-26 14:25:22.403668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.403813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.403853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.403972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.404102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.404129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.404225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.404314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.404339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.404448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.404575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.404601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.404718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.404879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.404917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.405035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.405179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.405234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.405336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.405506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.405556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 
00:20:41.017 [2024-04-26 14:25:22.405660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.405843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.405890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.405998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.406140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.406185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.406295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.406426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.406486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.406605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.406820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.406869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.406993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.407149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.407204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.407344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.407503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.407533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.407648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.407772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.407828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 
00:20:41.017 [2024-04-26 14:25:22.408006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.408144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.408201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.408342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.408529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.408581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.408679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.408775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.408803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.408920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.409060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.409116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.409233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.409362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.409409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.409560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.409725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.409753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.409863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.409980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.410005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 
00:20:41.017 [2024-04-26 14:25:22.410102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.410198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.410225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.410321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.410414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.410441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.017 [2024-04-26 14:25:22.410576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.410688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.017 [2024-04-26 14:25:22.410724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.017 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.410853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.410963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.410989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.411110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.411255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.411308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.411499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.411644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.411686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.411867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.411984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.412009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 
00:20:41.018 [2024-04-26 14:25:22.412146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.412272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.412328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.412439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.412557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.412584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.412709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.412894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.412963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.413098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.413248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.413278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.413404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.413548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.413599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.413717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.413848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.413915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.414030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.414243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.414291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 
00:20:41.018 [2024-04-26 14:25:22.414412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.414528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.414555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.414664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.414757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.414782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.414872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.414961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.414986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.415087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.415208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.415234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.415328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.415466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.415496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.415625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.415824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.415867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.416012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.416129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.416156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 
00:20:41.018 [2024-04-26 14:25:22.416302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.416390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.416416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.416539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.416658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.416713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.416838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.416962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.416993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.417123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.417260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.417316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.417437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.417589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.417614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.417732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.417891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.417944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.418053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.418209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.418263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 
00:20:41.018 [2024-04-26 14:25:22.418356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.418441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.418466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.418612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.418732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.418759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.418868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.418986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.419011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.419109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.419209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.419236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.419337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.419458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.419525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.419652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.419750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.018 [2024-04-26 14:25:22.419776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.018 qpair failed and we were unable to recover it. 00:20:41.018 [2024-04-26 14:25:22.419920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.420122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.420151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 
00:20:41.019 [2024-04-26 14:25:22.420281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.420456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.420498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 00:20:41.019 [2024-04-26 14:25:22.420604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.420734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.420760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 00:20:41.019 [2024-04-26 14:25:22.420947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.421043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.421068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 00:20:41.019 [2024-04-26 14:25:22.421184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.421334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.421386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 00:20:41.019 [2024-04-26 14:25:22.421548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.421659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.421686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 00:20:41.019 [2024-04-26 14:25:22.421796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.421926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.421984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 00:20:41.019 [2024-04-26 14:25:22.422093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.422257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.422310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 
00:20:41.019 [2024-04-26 14:25:22.422435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.422562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.422588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 00:20:41.019 [2024-04-26 14:25:22.422743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.422912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.422964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 00:20:41.019 [2024-04-26 14:25:22.423061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.423158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.423184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 00:20:41.019 [2024-04-26 14:25:22.423306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.423498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.423548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 00:20:41.019 [2024-04-26 14:25:22.423644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.423761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.423829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 00:20:41.019 [2024-04-26 14:25:22.423985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.424097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.424124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 00:20:41.019 [2024-04-26 14:25:22.424244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.424354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.424380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 
00:20:41.019 [2024-04-26 14:25:22.424492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.424619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.424658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 00:20:41.019 [2024-04-26 14:25:22.424802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.424995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.425035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 00:20:41.019 [2024-04-26 14:25:22.425171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.425281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.425323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 00:20:41.019 [2024-04-26 14:25:22.425443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.425564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.425589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 00:20:41.019 [2024-04-26 14:25:22.425726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.425928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.425958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 00:20:41.019 [2024-04-26 14:25:22.426084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.426214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.426269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 00:20:41.019 [2024-04-26 14:25:22.426363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.426447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.426472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 
00:20:41.019 [2024-04-26 14:25:22.426599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.426722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.426785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 00:20:41.019 [2024-04-26 14:25:22.426882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.427021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.427050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 00:20:41.019 [2024-04-26 14:25:22.427174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.427316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.427384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 00:20:41.019 [2024-04-26 14:25:22.427524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.427649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.427694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 00:20:41.019 [2024-04-26 14:25:22.427806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.427971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.019 [2024-04-26 14:25:22.428018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.019 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.428127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.428303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.428367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.428524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.428689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.428717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 
00:20:41.020 [2024-04-26 14:25:22.428834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.428945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.428970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.429115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.429251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.429278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.429444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.429556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.429581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.429678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.429778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.429804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.429912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.430044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.430089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.430184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.430276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.430303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.430396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.430486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.430512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 
00:20:41.020 [2024-04-26 14:25:22.430651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.430801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.430881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.431027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.431182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.431241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.431370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.431499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.431547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.431647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.431765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.431824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.431948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.432108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.432156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.432279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.432413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.432459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.432551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.432672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.432702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 
00:20:41.020 [2024-04-26 14:25:22.432819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.432924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.432986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.433155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.433281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.433329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.433501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.433658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.433701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.433798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.433919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.433989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.434117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.434246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.434305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.434470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.434621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.434710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.434885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.435035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.435080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 
00:20:41.020 [2024-04-26 14:25:22.435174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.435287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.435349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.435505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.435649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.435695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.435835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.436069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.436120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.436232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.436344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.436369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.436461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.436548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.436573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.436687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.436803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.436860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 00:20:41.020 [2024-04-26 14:25:22.436974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.437110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.020 [2024-04-26 14:25:22.437154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.020 qpair failed and we were unable to recover it. 
[... the same retry sequence (connect() failed, errno = 111; sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 14:25:22.437278 through 14:25:22.474722 ...]
00:20:41.025 [2024-04-26 14:25:22.474871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.474982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.475041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.025 qpair failed and we were unable to recover it. 00:20:41.025 [2024-04-26 14:25:22.475144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.475258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.475305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.025 qpair failed and we were unable to recover it. 00:20:41.025 [2024-04-26 14:25:22.475440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.475567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.475594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.025 qpair failed and we were unable to recover it. 00:20:41.025 [2024-04-26 14:25:22.475702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.475818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.475843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.025 qpair failed and we were unable to recover it. 00:20:41.025 [2024-04-26 14:25:22.475969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.476113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.476158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.025 qpair failed and we were unable to recover it. 00:20:41.025 [2024-04-26 14:25:22.476278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.476483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.476536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.025 qpair failed and we were unable to recover it. 00:20:41.025 [2024-04-26 14:25:22.476664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.476812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.476858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.025 qpair failed and we were unable to recover it. 
00:20:41.025 [2024-04-26 14:25:22.476954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.477071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.477118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.025 qpair failed and we were unable to recover it. 00:20:41.025 [2024-04-26 14:25:22.477244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.477422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.477477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.025 qpair failed and we were unable to recover it. 00:20:41.025 [2024-04-26 14:25:22.477601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.477731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.477777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.025 qpair failed and we were unable to recover it. 00:20:41.025 [2024-04-26 14:25:22.477903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.478047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.478101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.025 qpair failed and we were unable to recover it. 00:20:41.025 [2024-04-26 14:25:22.478202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.478308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.478357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.025 qpair failed and we were unable to recover it. 00:20:41.025 [2024-04-26 14:25:22.478463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.478566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.478592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.025 qpair failed and we were unable to recover it. 00:20:41.025 [2024-04-26 14:25:22.478747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.478929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.478980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.025 qpair failed and we were unable to recover it. 
00:20:41.025 [2024-04-26 14:25:22.479078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.479176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.479202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.025 qpair failed and we were unable to recover it. 00:20:41.025 [2024-04-26 14:25:22.479319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.479534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.479586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.025 qpair failed and we were unable to recover it. 00:20:41.025 [2024-04-26 14:25:22.479697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.025 [2024-04-26 14:25:22.479878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.479927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.480030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.480148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.480192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.480370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.480585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.480654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.480786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.480902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.480927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.481020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.481133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.481178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 
00:20:41.026 [2024-04-26 14:25:22.481287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.481441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.481498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.481664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.481807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.481889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.482008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.482200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.482248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.482369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.482502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.482545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.482668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.482798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.482847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.482938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.483051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.483095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.483211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.483338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.483399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 
00:20:41.026 [2024-04-26 14:25:22.483522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.483617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.483656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.483751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.483844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.483869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.483969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.484087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.484149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.484262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.484459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.484484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.484613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.484738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.484790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.484918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.485101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.485157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.485273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.485391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.485417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 
00:20:41.026 [2024-04-26 14:25:22.485546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.485666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.485693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.485835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.485929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.485955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.486061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.486175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.486231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.486359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.486518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.486572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.486717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.486833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.486895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.487029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.487225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.487278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.487399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.487510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.487535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 
00:20:41.026 [2024-04-26 14:25:22.487667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.487886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.487917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.488038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.488172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.488216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.488342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.488463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.488489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.488613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.488811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.488863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.488993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.489108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.026 [2024-04-26 14:25:22.489134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.026 qpair failed and we were unable to recover it. 00:20:41.026 [2024-04-26 14:25:22.489244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.489389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.489441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.027 qpair failed and we were unable to recover it. 00:20:41.027 [2024-04-26 14:25:22.489601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.489743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.489788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.027 qpair failed and we were unable to recover it. 
00:20:41.027 [2024-04-26 14:25:22.489905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.490026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.490051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.027 qpair failed and we were unable to recover it. 00:20:41.027 [2024-04-26 14:25:22.490146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.490262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.490317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.027 qpair failed and we were unable to recover it. 00:20:41.027 [2024-04-26 14:25:22.490445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.490597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.490657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.027 qpair failed and we were unable to recover it. 00:20:41.027 [2024-04-26 14:25:22.490785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.490907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.490936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.027 qpair failed and we were unable to recover it. 00:20:41.027 [2024-04-26 14:25:22.491053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.491205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.491259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.027 qpair failed and we were unable to recover it. 00:20:41.027 [2024-04-26 14:25:22.491380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.491526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.491565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.027 qpair failed and we were unable to recover it. 00:20:41.027 [2024-04-26 14:25:22.491691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.491810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.491856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.027 qpair failed and we were unable to recover it. 
00:20:41.027 [2024-04-26 14:25:22.491975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.492126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.492179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.027 qpair failed and we were unable to recover it. 00:20:41.027 [2024-04-26 14:25:22.492277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.492400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.492442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.027 qpair failed and we were unable to recover it. 00:20:41.027 [2024-04-26 14:25:22.492542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.492699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.492749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.027 qpair failed and we were unable to recover it. 00:20:41.027 [2024-04-26 14:25:22.492861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.493038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.493090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.027 qpair failed and we were unable to recover it. 00:20:41.027 [2024-04-26 14:25:22.493190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.493305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.493365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.027 qpair failed and we were unable to recover it. 00:20:41.027 [2024-04-26 14:25:22.493542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.493688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.493742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.027 qpair failed and we were unable to recover it. 00:20:41.027 [2024-04-26 14:25:22.493917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.494131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.494186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.027 qpair failed and we were unable to recover it. 
00:20:41.027 [2024-04-26 14:25:22.494355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.494479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.494533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.027 qpair failed and we were unable to recover it. 00:20:41.027 [2024-04-26 14:25:22.494639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.494728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.494753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.027 qpair failed and we were unable to recover it. 00:20:41.027 [2024-04-26 14:25:22.494902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.495038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.495083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.027 qpair failed and we were unable to recover it. 00:20:41.027 [2024-04-26 14:25:22.495275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.495425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.495490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.027 qpair failed and we were unable to recover it. 00:20:41.027 [2024-04-26 14:25:22.495696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.495787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.495812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.027 qpair failed and we were unable to recover it. 00:20:41.027 [2024-04-26 14:25:22.495945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.496054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.496081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.027 qpair failed and we were unable to recover it. 00:20:41.027 [2024-04-26 14:25:22.496171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.496299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.496351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.027 qpair failed and we were unable to recover it. 
00:20:41.027 [2024-04-26 14:25:22.496554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.496702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.496728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.027 qpair failed and we were unable to recover it. 00:20:41.027 [2024-04-26 14:25:22.496859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.497007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.497032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.027 qpair failed and we were unable to recover it. 00:20:41.027 [2024-04-26 14:25:22.497126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.027 [2024-04-26 14:25:22.497231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.497256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.497385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.497514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.497576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.497695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.497852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.497921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.498127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.498264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.498322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.498434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.498578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.498604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 
00:20:41.028 [2024-04-26 14:25:22.498727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.498883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.498909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.499005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.499126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.499182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.499296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.499465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.499518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.499673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.499769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.499795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.499897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.500025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.500070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.500193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.500343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.500390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.500483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.500600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.500652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 
00:20:41.028 [2024-04-26 14:25:22.500750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.500861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.500921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.501017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.501135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.501203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.501348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.501520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.501574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.501745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.501891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.501945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.502066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.502265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.502312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.502439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.502543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.502570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.502662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.502775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.502821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 
00:20:41.028 [2024-04-26 14:25:22.502943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.503060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.503086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.503190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.503280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.503306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.503457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.503574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.503598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.503741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.503875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.503920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.504015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.504127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.504180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.504269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.504388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.504443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.504551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.504739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.504796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 
00:20:41.028 [2024-04-26 14:25:22.504911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.505106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.505157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.505252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.505354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.505379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.505489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.505640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.505688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.505795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.505909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.028 [2024-04-26 14:25:22.505933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.028 qpair failed and we were unable to recover it. 00:20:41.028 [2024-04-26 14:25:22.506111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.506249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.506303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.506415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.506563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.506588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.506695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.506789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.506814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 
00:20:41.029 [2024-04-26 14:25:22.506907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.507016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.507060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.507166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.507372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.507420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.507544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.507689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.507728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.507855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.507980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.508041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.508205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.508408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.508457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.508547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.508699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.508751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.508872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.509002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.509046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 
00:20:41.029 [2024-04-26 14:25:22.509208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.509321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.509347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.509528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.509647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.509673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.509773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.509897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.509950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.510078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.510192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.510219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.510311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.510493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.510543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.510654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.510863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.510916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.511033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.511194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.511247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 
00:20:41.029 [2024-04-26 14:25:22.511357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.511574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.511623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.511734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.511853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.511905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.511998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.512138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.512165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.512277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.512387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.512411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.512508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.512599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.512628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.512741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.512935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.512985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.513093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.513205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.513232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 
00:20:41.029 [2024-04-26 14:25:22.513355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.513497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.513542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.513661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.513802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.513847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.513939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.514062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.514109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.514205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.514304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.514329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.514453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.514542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.514566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.514668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.514796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.514877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 00:20:41.029 [2024-04-26 14:25:22.515000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.515143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.029 [2024-04-26 14:25:22.515190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.029 qpair failed and we were unable to recover it. 
00:20:41.030 [2024-04-26 14:25:22.515284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 [2024-04-26 14:25:22.515425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 [2024-04-26 14:25:22.515478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.030 qpair failed and we were unable to recover it.
00:20:41.030 [2024-04-26 14:25:22.515590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 [2024-04-26 14:25:22.515757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 [2024-04-26 14:25:22.515815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.030 qpair failed and we were unable to recover it.
00:20:41.030 [2024-04-26 14:25:22.515918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 [2024-04-26 14:25:22.516135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 [2024-04-26 14:25:22.516185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.030 qpair failed and we were unable to recover it.
00:20:41.030 [2024-04-26 14:25:22.516305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 [2024-04-26 14:25:22.516462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 [2024-04-26 14:25:22.516488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.030 qpair failed and we were unable to recover it.
00:20:41.030 [2024-04-26 14:25:22.516596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 3213552 Killed "${NVMF_APP[@]}" "$@"
00:20:41.030 [2024-04-26 14:25:22.516689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 [2024-04-26 14:25:22.516715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.030 qpair failed and we were unable to recover it.
00:20:41.030 [2024-04-26 14:25:22.516842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 [2024-04-26 14:25:22.516990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 [2024-04-26 14:25:22.517038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.030 qpair failed and we were unable to recover it.
00:20:41.030 14:25:22 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:20:41.030 [2024-04-26 14:25:22.517133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 14:25:22 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:20:41.030 [2024-04-26 14:25:22.517289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 [2024-04-26 14:25:22.517338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.030 qpair failed and we were unable to recover it.
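Note: errno = 111 in the posix_sock_create records above is ECONNREFUSED. The test has just killed nvmf_tgt (the "Killed "${NVMF_APP[@]}"" message), so nothing is listening on 10.0.0.2:4420 and every host-side connect() is refused until disconnect_init restarts the target. A minimal shell sketch of the same probe the failing connect() performs — hedged: nc and its -z/-w flags are an assumption about the build host, not part of the SPDK harness:

    # Poll the NVMe-oF TCP listener; a refusal is the same errno 111
    # (ECONNREFUSED) that posix_sock_create keeps logging above.
    until nc -z -w 1 10.0.0.2 4420; do
        echo "10.0.0.2:4420 still refusing connections (ECONNREFUSED)"
        sleep 1
    done
    echo "listener is back on 10.0.0.2:4420"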
00:20:41.030 14:25:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:41.030 [2024-04-26 14:25:22.517435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.517526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.517551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.030 14:25:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:41.030 qpair failed and we were unable to recover it. 00:20:41.030 [2024-04-26 14:25:22.517678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:20:41.030 [2024-04-26 14:25:22.517795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.517820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.030 qpair failed and we were unable to recover it. 00:20:41.030 [2024-04-26 14:25:22.517922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.518016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.518042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.030 qpair failed and we were unable to recover it. 00:20:41.030 [2024-04-26 14:25:22.518154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.518249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.518275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.030 qpair failed and we were unable to recover it. 00:20:41.030 [2024-04-26 14:25:22.518394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.518527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.518552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.030 qpair failed and we were unable to recover it. 00:20:41.030 [2024-04-26 14:25:22.518650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.518768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.518832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.030 qpair failed and we were unable to recover it. 00:20:41.030 [2024-04-26 14:25:22.519028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.519239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.519283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.030 qpair failed and we were unable to recover it. 
00:20:41.030 [2024-04-26 14:25:22.519375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.519484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.519539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.030 qpair failed and we were unable to recover it. 00:20:41.030 [2024-04-26 14:25:22.519719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.519853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.519878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.030 qpair failed and we were unable to recover it. 00:20:41.030 [2024-04-26 14:25:22.519981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.520096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.520139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.030 qpair failed and we were unable to recover it. 00:20:41.030 [2024-04-26 14:25:22.520324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.520453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.520515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.030 qpair failed and we were unable to recover it. 00:20:41.030 [2024-04-26 14:25:22.520648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.520773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.520798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.030 qpair failed and we were unable to recover it. 00:20:41.030 [2024-04-26 14:25:22.520956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.521159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.521212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.030 qpair failed and we were unable to recover it. 00:20:41.030 [2024-04-26 14:25:22.521364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.521468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 [2024-04-26 14:25:22.521516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.030 qpair failed and we were unable to recover it. 
00:20:41.030 [2024-04-26 14:25:22.521667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 [2024-04-26 14:25:22.521788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 [2024-04-26 14:25:22.521833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.030 qpair failed and we were unable to recover it.
00:20:41.030 [2024-04-26 14:25:22.521947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 [2024-04-26 14:25:22.522052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 [2024-04-26 14:25:22.522086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.030 qpair failed and we were unable to recover it.
00:20:41.030 [2024-04-26 14:25:22.522208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 14:25:22 -- nvmf/common.sh@470 -- # nvmfpid=3213985
00:20:41.030 [2024-04-26 14:25:22.522345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 14:25:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:20:41.030 [2024-04-26 14:25:22.522391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.030 qpair failed and we were unable to recover it.
00:20:41.030 14:25:22 -- nvmf/common.sh@471 -- # waitforlisten 3213985
00:20:41.030 [2024-04-26 14:25:22.522493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 [2024-04-26 14:25:22.522597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 [2024-04-26 14:25:22.522656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.030 qpair failed and we were unable to recover it.
00:20:41.030 14:25:22 -- common/autotest_common.sh@817 -- # '[' -z 3213985 ']'
00:20:41.030 [2024-04-26 14:25:22.522773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 14:25:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:41.030 [2024-04-26 14:25:22.522937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 [2024-04-26 14:25:22.522984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.030 qpair failed and we were unable to recover it.
00:20:41.030 14:25:22 -- common/autotest_common.sh@822 -- # local max_retries=100
00:20:41.030 [2024-04-26 14:25:22.523092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 14:25:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:41.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:41.030 [2024-04-26 14:25:22.523227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.030 [2024-04-26 14:25:22.523272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.030 qpair failed and we were unable to recover it.
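Note: the traced waitforlisten 3213985 call above blocks until the relaunched nvmf_tgt (nvmfpid=3213985) is alive and serving RPCs on rpc_addr=/var/tmp/spdk.sock, retrying up to max_retries=100 times while the connect() failures keep streaming past it. Conceptually that wait amounts to the following sketch — an illustration only, not the actual common.sh implementation:

    # Poll until the pid is alive and its RPC UNIX socket exists,
    # after which RPCs can safely be issued to the new target.
    pid=3213985
    rpc_addr=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do            # mirrors max_retries=100
        kill -0 "$pid" 2>/dev/null || { echo "process $pid exited"; break; }
        [ -S "$rpc_addr" ] && { echo "pid $pid listening on $rpc_addr"; break; }
        sleep 0.5
    done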
00:20:41.030 14:25:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:41.030 [2024-04-26 14:25:22.523393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.030 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:20:41.030 [2024-04-26 14:25:22.523509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.523534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.523663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.523806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.523874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.523986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.524077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.524102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.524221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.524357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.524409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.524539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.524679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.524707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.524885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.525013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.525040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.528648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.528757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.528786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 
00:20:41.031 [2024-04-26 14:25:22.528902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.529005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.529033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.529135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.529251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.529280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.529415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.529548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.529575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.529715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.529852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.529878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.529992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.530106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.530136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.530271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.530402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.530430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.530571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.530701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.530728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 
00:20:41.031 [2024-04-26 14:25:22.530829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.530931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.530957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.531058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.531213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.531240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.531341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.531447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.531472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.531578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.531680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.531707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.531834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.531928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.531954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.532052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.532147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.532172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.532270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.532393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.532417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 
00:20:41.031 [2024-04-26 14:25:22.532514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.532655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.532687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.532780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.532875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.532902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.533033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.533154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.533180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.533309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.533409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.533436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.533545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.533652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.533681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.533790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.533915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.533940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.534066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.534161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.534187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 
00:20:41.031 [2024-04-26 14:25:22.534320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.534410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.534435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.534536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.534672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.534699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.031 qpair failed and we were unable to recover it. 00:20:41.031 [2024-04-26 14:25:22.534825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.031 [2024-04-26 14:25:22.534923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.534951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.032 qpair failed and we were unable to recover it. 00:20:41.032 [2024-04-26 14:25:22.535057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.535175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.535201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.032 qpair failed and we were unable to recover it. 00:20:41.032 [2024-04-26 14:25:22.535302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.535396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.535424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.032 qpair failed and we were unable to recover it. 00:20:41.032 [2024-04-26 14:25:22.535521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.535648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.535675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.032 qpair failed and we were unable to recover it. 00:20:41.032 [2024-04-26 14:25:22.535772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.535907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.535933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.032 qpair failed and we were unable to recover it. 
00:20:41.032 [2024-04-26 14:25:22.536032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.536163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.536189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.032 qpair failed and we were unable to recover it. 00:20:41.032 [2024-04-26 14:25:22.536289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.536406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.536431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.032 qpair failed and we were unable to recover it. 00:20:41.032 [2024-04-26 14:25:22.536553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.536652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.536680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.032 qpair failed and we were unable to recover it. 00:20:41.032 [2024-04-26 14:25:22.536786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.536884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.536911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.032 qpair failed and we were unable to recover it. 00:20:41.032 [2024-04-26 14:25:22.537007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.537106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.537131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.032 qpair failed and we were unable to recover it. 00:20:41.032 [2024-04-26 14:25:22.537229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.537336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.537361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.032 qpair failed and we were unable to recover it. 00:20:41.032 [2024-04-26 14:25:22.537463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.537564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.537590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.032 qpair failed and we were unable to recover it. 
00:20:41.032 [2024-04-26 14:25:22.537699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.537794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.537820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.032 qpair failed and we were unable to recover it. 00:20:41.032 [2024-04-26 14:25:22.537920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.538016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.538041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.032 qpair failed and we were unable to recover it. 00:20:41.032 [2024-04-26 14:25:22.538146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.538236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.538261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.032 qpair failed and we were unable to recover it. 00:20:41.032 [2024-04-26 14:25:22.538387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.538483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.538509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.032 qpair failed and we were unable to recover it. 00:20:41.032 [2024-04-26 14:25:22.538607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.538709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.538736] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.032 qpair failed and we were unable to recover it. 00:20:41.032 [2024-04-26 14:25:22.538837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.538926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.538951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.032 qpair failed and we were unable to recover it. 00:20:41.032 [2024-04-26 14:25:22.539054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.539165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.539192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.032 qpair failed and we were unable to recover it. 
00:20:41.032 [2024-04-26 14:25:22.539322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.539432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.539460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.032 qpair failed and we were unable to recover it. 00:20:41.032 [2024-04-26 14:25:22.539569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.539662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.539690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.032 qpair failed and we were unable to recover it. 00:20:41.032 [2024-04-26 14:25:22.539817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.539920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.539945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.032 qpair failed and we were unable to recover it. 00:20:41.032 [2024-04-26 14:25:22.540072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.540171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.540201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.032 qpair failed and we were unable to recover it. 00:20:41.032 [2024-04-26 14:25:22.540318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.540427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.540454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.032 qpair failed and we were unable to recover it. 00:20:41.032 [2024-04-26 14:25:22.540558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.540645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.540670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.032 qpair failed and we were unable to recover it. 00:20:41.032 [2024-04-26 14:25:22.540796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.032 [2024-04-26 14:25:22.540891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.540919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.033 qpair failed and we were unable to recover it. 
00:20:41.033 [2024-04-26 14:25:22.541076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.541195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.541226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.033 qpair failed and we were unable to recover it. 00:20:41.033 [2024-04-26 14:25:22.541328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.541458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.541485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.033 qpair failed and we were unable to recover it. 00:20:41.033 [2024-04-26 14:25:22.541584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.541696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.541734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.033 qpair failed and we were unable to recover it. 00:20:41.033 [2024-04-26 14:25:22.541855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.541981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.542019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.033 qpair failed and we were unable to recover it. 00:20:41.033 [2024-04-26 14:25:22.542137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.542270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.542306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.033 qpair failed and we were unable to recover it. 00:20:41.033 [2024-04-26 14:25:22.542419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.542534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.542576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.033 qpair failed and we were unable to recover it. 00:20:41.033 [2024-04-26 14:25:22.542699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.542803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.542830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.033 qpair failed and we were unable to recover it. 
00:20:41.033 [2024-04-26 14:25:22.542933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.543039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.543065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.033 qpair failed and we were unable to recover it. 00:20:41.033 [2024-04-26 14:25:22.543160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.543253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.543278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.033 qpair failed and we were unable to recover it. 00:20:41.033 [2024-04-26 14:25:22.543409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.543504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.543528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.033 qpair failed and we were unable to recover it. 00:20:41.033 [2024-04-26 14:25:22.543627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.543738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.543764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.033 qpair failed and we were unable to recover it. 00:20:41.033 [2024-04-26 14:25:22.543865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.543965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.543993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.033 qpair failed and we were unable to recover it. 00:20:41.033 [2024-04-26 14:25:22.544127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.544252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.544288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.033 qpair failed and we were unable to recover it. 00:20:41.033 [2024-04-26 14:25:22.544397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.544502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.033 [2024-04-26 14:25:22.544528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.033 qpair failed and we were unable to recover it. 
00:20:41.326 [2024-04-26 14:25:22.544641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.326 [2024-04-26 14:25:22.544762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.326 [2024-04-26 14:25:22.544789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.326 qpair failed and we were unable to recover it. 00:20:41.326 [2024-04-26 14:25:22.544892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.326 [2024-04-26 14:25:22.544990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.326 [2024-04-26 14:25:22.545021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.326 qpair failed and we were unable to recover it. 00:20:41.326 [2024-04-26 14:25:22.545133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.326 [2024-04-26 14:25:22.545254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.326 [2024-04-26 14:25:22.545293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.326 qpair failed and we were unable to recover it. 00:20:41.326 [2024-04-26 14:25:22.545408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.326 [2024-04-26 14:25:22.545535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.326 [2024-04-26 14:25:22.545571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.326 qpair failed and we were unable to recover it. 00:20:41.326 [2024-04-26 14:25:22.545705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.326 [2024-04-26 14:25:22.545839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.326 [2024-04-26 14:25:22.545872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.326 qpair failed and we were unable to recover it. 00:20:41.326 [2024-04-26 14:25:22.546002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.326 [2024-04-26 14:25:22.546126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.326 [2024-04-26 14:25:22.546161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.326 qpair failed and we were unable to recover it. 00:20:41.326 [2024-04-26 14:25:22.546285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.326 [2024-04-26 14:25:22.546395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.326 [2024-04-26 14:25:22.546421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.326 qpair failed and we were unable to recover it. 
00:20:41.326 [2024-04-26 14:25:22.546550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.326 [2024-04-26 14:25:22.546662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.326 [2024-04-26 14:25:22.546695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.326 qpair failed and we were unable to recover it.
[log compacted: the preceding four-line sequence repeats 4 more times for tqpair=0x124d340, timestamps 14:25:22.546817 through 14:25:22.547693]
00:20:41.326 [2024-04-26 14:25:22.547812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.326 [2024-04-26 14:25:22.547911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.326 [2024-04-26 14:25:22.547936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:20:41.326 qpair failed and we were unable to recover it.
[log compacted: the sequence repeats 35 more times for tqpair=0x7f6a58000b90, timestamps 14:25:22.548062 through 14:25:22.556265]
00:20:41.327 [2024-04-26 14:25:22.556376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.327 [2024-04-26 14:25:22.556483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.327 [2024-04-26 14:25:22.556509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.327 qpair failed and we were unable to recover it.
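[editor's note: errno 111 is ECONNREFUSED — every attempt is being actively refused because nothing is accepting TCP connections on 10.0.0.2:4420 (the NVMe/TCP default port) at this point in the test. The standalone sketch below is illustrative only, not SPDK's posix.c; it reproduces the same failure mode with a plain connect(2):]

    /* Minimal sketch: how connect(2) surfaces errno 111 when no listener
     * is bound to the target address/port. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = { .sin_family = AF_INET };
        addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* with no listener this prints: connect() failed, errno = 111 */
            fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                    errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

[run against a host with no listener on that port, it prints "connect() failed, errno = 111 (Connection refused)", matching the posix.c messages above]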
00:20:41.327 [2024-04-26 14:25:22.556609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.327 [2024-04-26 14:25:22.556714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.327 [2024-04-26 14:25:22.556742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.327 qpair failed and we were unable to recover it.
[log compacted: the sequence repeats 76 more times for tqpair=0x124d340, timestamps 14:25:22.556843 through 14:25:22.574697]
[log compacted: 4 more occurrences of the sequence for tqpair=0x124d340, timestamps 14:25:22.574810 through 14:25:22.575614]
00:20:41.329 [2024-04-26 14:25:22.575706] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization...
00:20:41.329 [2024-04-26 14:25:22.575735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.329 [2024-04-26 14:25:22.575771] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:41.329 [2024-04-26 14:25:22.575830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.329 [2024-04-26 14:25:22.575853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.329 qpair failed and we were unable to recover it.
[log compacted: 2 more occurrences of the sequence, timestamps 14:25:22.575960 through 14:25:22.576331]
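[editor's note: the refusals span a window in which a target-side nvmf application appears to still be initializing — its SPDK/DPDK startup banner is interleaved just above — and each qpair is eventually abandoned once its connection attempts are exhausted ("qpair failed and we were unable to recover it"). A hedged sketch of a bounded retry loop of that general shape; connect_once() here is a hypothetical stand-in, not an SPDK API:]

    /* Hedged sketch: retry a refused connection with backoff, then give up. */
    #include <stdio.h>
    #include <stdbool.h>
    #include <errno.h>
    #include <unistd.h>

    /* Stand-in helper: pretend the listener never comes up, as in the log. */
    static int connect_once(void)
    {
        errno = ECONNREFUSED;            /* what connect(2) reported above (111) */
        return -1;
    }

    static bool connect_with_retry(int max_attempts, useconds_t delay_us)
    {
        for (int i = 0; i < max_attempts; i++) {
            if (connect_once() == 0)
                return true;             /* listener finally answered */
            if (errno != ECONNREFUSED)
                return false;            /* only retry "nothing listening yet" */
            usleep(delay_us);
            delay_us *= 2;               /* exponential backoff between attempts */
        }
        return false;                    /* retries exhausted: "unable to recover it" */
    }

    int main(void)
    {
        if (!connect_with_retry(5, 1000))
            fprintf(stderr, "giving up: errno = %d\n", errno);
        return 0;
    }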
[log compacted: the sequence repeats 27 more times for tqpair=0x124d340, timestamps 14:25:22.576437 through 14:25:22.582605, ending with:]
00:20:41.330 [2024-04-26 14:25:22.582709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.330 [2024-04-26 14:25:22.582813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.330 [2024-04-26 14:25:22.582837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.330 qpair failed and we were unable to recover it.
00:20:41.330 [2024-04-26 14:25:22.582938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.583055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.583080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.583192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.583291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.583316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.583440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.583535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.583561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.583665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.583765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.583792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.583885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.583983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.584009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.584119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.584215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.584242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.584351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.584469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.584493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 
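errno = 111 is ECONNREFUSED on Linux: each TCP connection attempt to 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port) is actively refused, meaning the peer is reachable but nothing is listening on that port, i.e. the NVMe-oF target is not up at this point in the test. A minimal standalone C sketch of the same failure mode, assuming only a port with no listener; the 127.0.0.1/4420 values below are placeholders, not taken from the test:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical target: any addr/port with no listener behaves like
     * 10.0.0.2:4420 does in the log above. */
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With no listener this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}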
00:20:41.330 [2024-04-26 14:25:22.584593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.584689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.584714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.584813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.584907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.584931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.585041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.585144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.585168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.585288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.585399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.585426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.585530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.585636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.585663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.585775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.585878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.585904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.586004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.586105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.586132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 
00:20:41.330 [2024-04-26 14:25:22.586234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.586335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.586361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.586460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.586559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.586584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.586699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.586810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.586842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.586947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.587055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.587081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.587181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.587288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.587313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.587412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.587508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.587534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.587647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.587744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.587769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 
00:20:41.330 [2024-04-26 14:25:22.587873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.587973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.587999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.588114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.588214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.588247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.588349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.588445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.588471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.588575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.588678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.588703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.588818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.588937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.588962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.589062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.589170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.589195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.589317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.589418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.589443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 
00:20:41.330 [2024-04-26 14:25:22.589554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.589652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.589678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.589783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.589892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.589917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.590034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.590134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.330 [2024-04-26 14:25:22.590160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.330 qpair failed and we were unable to recover it. 00:20:41.330 [2024-04-26 14:25:22.590257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.590352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.590378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.590476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.590578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.590609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.590724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.590827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.590851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.590953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.591047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.591072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 
00:20:41.331 [2024-04-26 14:25:22.591187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.591297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.591322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.591423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.591523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.591561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.591707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.591834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.591872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.592002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.592132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.592171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.592298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.592430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.592468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.592599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.592712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.592740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.592850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.592950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.592976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 
00:20:41.331 [2024-04-26 14:25:22.593085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.593182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.593212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.593310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.593402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.593426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.593525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.593622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.593654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.593758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.593864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.593890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.593979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.594081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.594106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.594201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.594300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.594324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.594441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.594541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.594572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 
00:20:41.331 [2024-04-26 14:25:22.594679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.594785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.594810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.594911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.595019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.595044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.595156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.595265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.595291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.595387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.595486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.595510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.595611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.595713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.595737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.595840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.595938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.595963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.596077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.596175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.596200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 
00:20:41.331 [2024-04-26 14:25:22.596299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.596397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.596422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.596512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.596599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.596623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.596738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.596835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.596859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.596956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.597051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.597075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.597177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.597274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.597298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.597392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.597488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.597512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.597609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.597740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.597767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 
00:20:41.331 [2024-04-26 14:25:22.597883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.597985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.598009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.598099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.598192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.598217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.598312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.598413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.598438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.598542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.598651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.598676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.598773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.598871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.598897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.599004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.599111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.599135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.599233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.599330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.599357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 
00:20:41.331 [2024-04-26 14:25:22.599459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.599550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.599575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.599675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.599761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.599785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.331 qpair failed and we were unable to recover it. 00:20:41.331 [2024-04-26 14:25:22.599892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.331 [2024-04-26 14:25:22.599988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.600012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.600111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.600206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.600230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.600331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.600422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.600446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.600550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.600656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.600683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.600782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.600875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.600902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 
00:20:41.332 [2024-04-26 14:25:22.601005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.601103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.601128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.601223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.601318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.601342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.601435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.601521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.601546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.601666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.601763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.601787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.601893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.601990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.602014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.602110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.602203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.602229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.602321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.602416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.602441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 
00:20:41.332 [2024-04-26 14:25:22.602535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.602649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.602675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.602774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.602874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.602901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.603005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.603112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.603139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.603233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.603332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.603357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.603447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.603543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.603566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.603661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.603760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.603786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.603899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.603990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.604017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 
00:20:41.332 [2024-04-26 14:25:22.604112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.604202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.604226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.604332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.604429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.604455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.604545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.604696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.604727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.604834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.604935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.604960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.605056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.605160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.605186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.605296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.605392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.605417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.605522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.605615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.605646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 
00:20:41.332 [2024-04-26 14:25:22.605748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.605841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.605865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.605972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.606065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.606089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.606184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.606286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.606312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.606410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.606508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.606531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.606639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.606740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.606766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.606866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.606966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.606991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.607101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.607203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.607228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 
00:20:41.332 [2024-04-26 14:25:22.607330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.607422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.607446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.607572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.607705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.607746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.607880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.608003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.608041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.332 [2024-04-26 14:25:22.608173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.608289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.332 [2024-04-26 14:25:22.608327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.332 qpair failed and we were unable to recover it. 00:20:41.333 [2024-04-26 14:25:22.608438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.333 [2024-04-26 14:25:22.608541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.333 [2024-04-26 14:25:22.608566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.333 qpair failed and we were unable to recover it. 00:20:41.333 [2024-04-26 14:25:22.608678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.333 [2024-04-26 14:25:22.608782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.333 [2024-04-26 14:25:22.608808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.333 qpair failed and we were unable to recover it. 00:20:41.333 [2024-04-26 14:25:22.608917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.333 [2024-04-26 14:25:22.609016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.333 [2024-04-26 14:25:22.609040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.333 qpair failed and we were unable to recover it. 
00:20:41.333 [2024-04-26 14:25:22.609130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.609219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.609243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.609347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.609444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.609469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.609579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.609686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.609712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.609821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.609912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.609936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.610042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.610146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.610170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.610260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.610355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.610379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.610472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.610560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.610584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.610682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.610773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.610797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.610901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.610988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.611012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.611109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.611198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.611222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.611318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.611408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.611433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.611555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.611665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.611692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.611786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.611891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.611917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.612012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.612115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.612140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.612243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.612339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.612364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.612452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.612548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.612573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.612668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.612756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 EAL: No free 2048 kB hugepages reported on node 1
00:20:41.333 [2024-04-26 14:25:22.612782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.612883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.612979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.613004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.613099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.613196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.613221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.613309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.613403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.613427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.613535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.613641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.613666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
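The stray "EAL: No free 2048 kB hugepages reported on node 1" line above comes from DPDK's environment abstraction layer rather than from the socket path: at process start-up there were no free 2 MB hugepages on NUMA node 1. The per-node counters the EAL consults are exposed in sysfs; the following minimal C sketch (an editorial illustration, not part of the test suite; node 1 and the 2048 kB page size are taken from the message above) reads them the way a manual check would:

/* hugepage_check.c - minimal sketch: report 2048 kB hugepage counters for
 * NUMA node 1, mirroring the condition behind "EAL: No free 2048 kB
 * hugepages reported on node 1". The sysfs paths are standard on Linux;
 * a node1 directory is assumed to exist on this two-node machine. */
#include <stdio.h>

static long read_counter(const char *path)
{
    FILE *f = fopen(path, "r");
    long val = -1;

    if (f) {
        if (fscanf(f, "%ld", &val) != 1)
            val = -1;
        fclose(f);
    }
    return val;
}

int main(void)
{
    const char *base = "/sys/devices/system/node/node1/hugepages/hugepages-2048kB";
    char path[256];
    long nr, free_pages;

    snprintf(path, sizeof(path), "%s/nr_hugepages", base);
    nr = read_counter(path);
    snprintf(path, sizeof(path), "%s/free_hugepages", base);
    free_pages = read_counter(path);

    printf("node 1: nr_hugepages=%ld free_hugepages=%ld\n", nr, free_pages);
    if (free_pages == 0)
        printf("no free 2048 kB hugepages on node 1 (matches the EAL warning)\n");
    return 0;
}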
00:20:41.333 [2024-04-26 14:25:22.613775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.613870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.613895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.613997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.614097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.614121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.614223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.614316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.614341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.614438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.614551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.614576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.614694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.614812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.614838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.614938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.615023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.615047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.615154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.615251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.615277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.615382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.615468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.615492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.615593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.615699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.615730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.615867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.616019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.616045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.616145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.616274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.616298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.616400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.616500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.616525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.616623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.616741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.616767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.616879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.617007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.617034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.617141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.617239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.617264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.617361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.617457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.617481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.617575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.617695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.617722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.333 [2024-04-26 14:25:22.617836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.617930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.333 [2024-04-26 14:25:22.617964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.333 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.618065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.618175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.618199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.618296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.618387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.618411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.618517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.618617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.618657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.618760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.618863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.618892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.618997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.619091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.619115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.619236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.619364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.619389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.619488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.619591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.619616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.619726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.619855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.619881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.619981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.620077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.620101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.620205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.620302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.620327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.620423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.620517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.620543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.620650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.620763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.620789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.620886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.620979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.621004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.621122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.621223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.621249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.621368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.621457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.621482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.621587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.621702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.621729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.621824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.621924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.621950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.622042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.622144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.622169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.622274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.622390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.622416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.622519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.622617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.622667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.622784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.622877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.622902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.623017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.623107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.623133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.623231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.623327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.623352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.623454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.623550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.623582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.623697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.623794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.623818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.623942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.624036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.624061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.624156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.624267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.624292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.624399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.624496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.624520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.624621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.624733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.624758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.624853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.624950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.624977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.625084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.625182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.625208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.625318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.625417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.625443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.625546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.625645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.625674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.625776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.625873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.625898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.626028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.626159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.626189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.626288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.626389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.626415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.626525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.626623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.626664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.626795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.626904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.626932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.627034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.627133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.627160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.627265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.627378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.627404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.627513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.627616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.334 [2024-04-26 14:25:22.627651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.334 qpair failed and we were unable to recover it.
00:20:41.334 [2024-04-26 14:25:22.627753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.627855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.627880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.627976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.628076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.628101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.628206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.628300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.628326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.628464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.628588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.628622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.628763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.628872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.628904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.629034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.629155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.629186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.629312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.629455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.629483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.629591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.629705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.629732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.629834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.629944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.629969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.630070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.630222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.630248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.630354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.630452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.630479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.630579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.630681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.630708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.630819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.630917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.630942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.631088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.631191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.631218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.631326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.631427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.631454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.631555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.631708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.631735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.631841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.631949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.631974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.632074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.632172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.632198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.632293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.632406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.632431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.632534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.632642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.632669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.632770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.632868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.632894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.632993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.633093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.633119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.633218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.633321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.633347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.633446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.633543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.633568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.633675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.633772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.633797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.633951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.634050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.634075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.634202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.634299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.634324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.634457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.634556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.634583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.634700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.634831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.634858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.634958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.635058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.635084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.635211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.635311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.635336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.635441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.635538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.635565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.635671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.635801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.635826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.635929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.636067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.636093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.636201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.636299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.636324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.636434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.636557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.636583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.636809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.636910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.636937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.335 qpair failed and we were unable to recover it.
00:20:41.335 [2024-04-26 14:25:22.637040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.335 [2024-04-26 14:25:22.637130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.336 [2024-04-26 14:25:22.637155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.336 qpair failed and we were unable to recover it.
00:20:41.336 [2024-04-26 14:25:22.637374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.336 [2024-04-26 14:25:22.637502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.336 [2024-04-26 14:25:22.637529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.336 qpair failed and we were unable to recover it.
00:20:41.336 [2024-04-26 14:25:22.637638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.336 [2024-04-26 14:25:22.637746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.336 [2024-04-26 14:25:22.637772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.336 qpair failed and we were unable to recover it.
00:20:41.336 [2024-04-26 14:25:22.637896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.336 [2024-04-26 14:25:22.637994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.336 [2024-04-26 14:25:22.638021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.336 qpair failed and we were unable to recover it.
00:20:41.336 [2024-04-26 14:25:22.638149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.336 [2024-04-26 14:25:22.638254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.336 [2024-04-26 14:25:22.638280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.336 qpair failed and we were unable to recover it.
00:20:41.336 [2024-04-26 14:25:22.638410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.336 [2024-04-26 14:25:22.638542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.336 [2024-04-26 14:25:22.638567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.336 qpair failed and we were unable to recover it.
00:20:41.336 [2024-04-26 14:25:22.638667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.336 [2024-04-26 14:25:22.638770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.336 [2024-04-26 14:25:22.638796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.336 qpair failed and we were unable to recover it.
00:20:41.336 [2024-04-26 14:25:22.638895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.336 [2024-04-26 14:25:22.639114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.336 [2024-04-26 14:25:22.639140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.336 qpair failed and we were unable to recover it.
00:20:41.336 [2024-04-26 14:25:22.639244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.336 [2024-04-26 14:25:22.639346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.336 [2024-04-26 14:25:22.639373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.336 qpair failed and we were unable to recover it.
00:20:41.336 [2024-04-26 14:25:22.639473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.639571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.639598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.639699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.639917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.639943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.640042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.640131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.640157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.640295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.640393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.640418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.640542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.640645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.640671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.640774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.640903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.640928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.641028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.641120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.641146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 
00:20:41.336 [2024-04-26 14:25:22.641252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.641357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.641388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.641489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.641592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.641617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.641756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.641885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.641911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.642013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.642114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.642141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.642271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.642368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.642394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.642494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.642596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.642621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.642732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.642827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.642853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 
00:20:41.336 [2024-04-26 14:25:22.642950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.643046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.643072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.643293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.643395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.643422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.643517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.643616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.643651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.643752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.643858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.643888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.643994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.644121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.644147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.644247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.644371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.644397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.644505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.644598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.644623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 
00:20:41.336 [2024-04-26 14:25:22.644763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.644862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.644888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.644990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.645089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.645115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.645218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.645349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.645376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.645475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.645574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.645601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.645749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.645847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.645873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.646003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.646037] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:41.336 [2024-04-26 14:25:22.646109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.646137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.646238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.646352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.646378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 
00:20:41.336 [2024-04-26 14:25:22.646478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.646601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.646626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.646765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.646861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.646886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.646988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.647084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.647110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.647232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.647336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.647361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.336 qpair failed and we were unable to recover it. 00:20:41.336 [2024-04-26 14:25:22.647467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.336 [2024-04-26 14:25:22.647563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.647590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.647714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.647868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.647894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.647996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.648087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.648113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 
00:20:41.337 [2024-04-26 14:25:22.648254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.648348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.648373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.648477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.648578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.648603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.648717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.648823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.648849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.648951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.649075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.649102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.649222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.649320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.649346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.649449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.649561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.649587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.649693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.649805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.649831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 
00:20:41.337 [2024-04-26 14:25:22.649952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.650056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.650081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.650185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.650285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.650310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.650412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.650532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.650558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.650665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.650763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.650790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.650904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.650999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.651024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.651128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.651244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.651274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.651380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.651479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.651504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 
00:20:41.337 [2024-04-26 14:25:22.651621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.651765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.651792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.651908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.652015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.652041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.652139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.652238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.652265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.652393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.652487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.652513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.652613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.652721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.652748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.652858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.652972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.652997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.653102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.653198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.653225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 
00:20:41.337 [2024-04-26 14:25:22.653324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.653426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.653452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.653552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.653654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.653686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.653899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.654007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.654034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.654146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.654257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.654283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.654388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.654508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.654533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.654644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.654765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.654791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.654911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.655007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.655033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 
00:20:41.337 [2024-04-26 14:25:22.655129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.655228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.655255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.655366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.655465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.655491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.655589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.655710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.655737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.655839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.655957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.655984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.656113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.656227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.656260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.656368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.656466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.656492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.656599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.656725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.656751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 
00:20:41.337 [2024-04-26 14:25:22.656972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.657071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.657099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.657194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.657310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.657336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.657449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.657561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.657587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.657692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.657792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.337 [2024-04-26 14:25:22.657817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.337 qpair failed and we were unable to recover it. 00:20:41.337 [2024-04-26 14:25:22.657921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.658031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.658057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.658159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.658261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.658288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.658390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.658499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.658524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 
00:20:41.338 [2024-04-26 14:25:22.658639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.658739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.658765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.658877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.658981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.659006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.659112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.659209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.659235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.659349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.659449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.659474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.659572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.659675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.659703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.659814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.659915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.659942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.660042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.660135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.660161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 
00:20:41.338 [2024-04-26 14:25:22.660254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.660357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.660383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.660492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.660587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.660613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.660721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.660821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.660849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.660950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.661040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.661066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.661174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.661276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.661302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.661402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.661506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.661532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.661637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.661736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.661763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 
00:20:41.338 [2024-04-26 14:25:22.661879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.661980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.662005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.662102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.662198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.662223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.662340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.662432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.662457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.662556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.662659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.662685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.662842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.662942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.662967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.663076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.663183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.663210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.663307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.663402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.663427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 
00:20:41.338 [2024-04-26 14:25:22.663533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.663643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.663669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.663781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.663877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.663903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.664006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.664106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.664132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.664285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.664384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.664411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.664517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.664617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.664650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.664759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.664859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.664885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.664984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.665082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.665107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 
00:20:41.338 [2024-04-26 14:25:22.665210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.665320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.665345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.665449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.665551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.665578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.665679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.665777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.665804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.665919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.666024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.666050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.666149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.666247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.666274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.666375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.666466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.338 [2024-04-26 14:25:22.666492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.338 qpair failed and we were unable to recover it. 00:20:41.338 [2024-04-26 14:25:22.666594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.339 [2024-04-26 14:25:22.666699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.339 [2024-04-26 14:25:22.666726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.339 qpair failed and we were unable to recover it. 
00:20:41.339 [2024-04-26 14:25:22.666828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.339 [2024-04-26 14:25:22.666920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.339 [2024-04-26 14:25:22.666946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420
00:20:41.339 qpair failed and we were unable to recover it.
[... the same posix.c:1037:posix_sock_create / nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." triplet repeats roughly 150 more times between 14:25:22.667165 and 14:25:22.702483, identical except for the timestamps, with the reported tqpair handle cycling among 0x7f6a60000b90, 0x124d340, and 0x7f6a58000b90; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:20:41.342 [2024-04-26 14:25:22.702580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.342 [2024-04-26 14:25:22.702672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.342 [2024-04-26 14:25:22.702699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.342 qpair failed and we were unable to recover it. 00:20:41.342 [2024-04-26 14:25:22.702796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.342 [2024-04-26 14:25:22.702887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.342 [2024-04-26 14:25:22.702911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.342 qpair failed and we were unable to recover it. 00:20:41.342 [2024-04-26 14:25:22.703001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.342 [2024-04-26 14:25:22.703097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.342 [2024-04-26 14:25:22.703122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.342 qpair failed and we were unable to recover it. 00:20:41.342 [2024-04-26 14:25:22.703230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.342 [2024-04-26 14:25:22.703316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.342 [2024-04-26 14:25:22.703340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.342 qpair failed and we were unable to recover it. 00:20:41.342 [2024-04-26 14:25:22.703443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.342 [2024-04-26 14:25:22.703548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.342 [2024-04-26 14:25:22.703575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.342 qpair failed and we were unable to recover it. 00:20:41.342 [2024-04-26 14:25:22.703720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.342 [2024-04-26 14:25:22.703819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.342 [2024-04-26 14:25:22.703844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.342 qpair failed and we were unable to recover it. 00:20:41.342 [2024-04-26 14:25:22.703935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.704025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.704050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 
00:20:41.343 [2024-04-26 14:25:22.704152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.704253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.704279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.704370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.704468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.704495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.704588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.704683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.704709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.704808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.704904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.704931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.705030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.705120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.705145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.705242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.705327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.705352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.705452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.705548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.705574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 
00:20:41.343 [2024-04-26 14:25:22.705688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.705784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.705809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.705923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.706015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.706039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.706140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.706233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.706258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.706360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.706457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.706481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.706571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.706671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.706698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.706790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.706875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.706900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.707006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.707137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.707164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 
00:20:41.343 [2024-04-26 14:25:22.707259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.707356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.707381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.707480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.707587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.707614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.707713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.707806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.707831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.707932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.708023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.708049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.708150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.708245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.708272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.708370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.708487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.708512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.708604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.708711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.708739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 
00:20:41.343 [2024-04-26 14:25:22.708845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.708946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.708974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.709080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.709177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.709204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.709303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.709395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.709419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.709512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.709611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.709643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.709745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.709842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.709870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.709966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.710059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.710084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.710174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.710278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.710303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 
00:20:41.343 [2024-04-26 14:25:22.710400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.710495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.710521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.710618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.710716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.710741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.710828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.710916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.710940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.711063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.711160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.711185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.711284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.711381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.711406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.711510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.711607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.711640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.711754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.711851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.711876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 
00:20:41.343 [2024-04-26 14:25:22.711978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.712070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.712096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.712189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.712293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.712319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.712408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.712499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.712528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.712628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.712756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.712781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.712875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.712966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.712990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.343 [2024-04-26 14:25:22.713089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.713179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.343 [2024-04-26 14:25:22.713205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.343 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.713302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.713389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.713413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 
00:20:41.344 [2024-04-26 14:25:22.713512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.713607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.713639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.713734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.713847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.713873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.713982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.714082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.714108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.714207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.714294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.714319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.714451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.714555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.714583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.714698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.714792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.714824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.714940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.715046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.715072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 
00:20:41.344 [2024-04-26 14:25:22.715171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.715260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.715285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.715390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.715486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.715512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.715604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.715722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.715748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.715844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.715935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.715960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.716053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.716148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.716173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.716278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.716376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.716401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.716494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.716589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.716613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 
00:20:41.344 [2024-04-26 14:25:22.716717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.716820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.716845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.716941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.717033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.717057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.717155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.717246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.717270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.717367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.717462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.717487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.717584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.717687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.717714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.717818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.717918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.717943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.718037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.718128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.718153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 
00:20:41.344 [2024-04-26 14:25:22.718246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.718340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.718365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.718460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.718554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.718578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.718677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.718773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.718798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.718893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.718978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.719003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.719101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.719193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.719219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.719320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.719416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.719441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.719544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.719642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.719676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 
00:20:41.344 [2024-04-26 14:25:22.719776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.719875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.719899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.719998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.720089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.720114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.720206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.720298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.720322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.720420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.720523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.720548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.720659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.720756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.720780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.720887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.720985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.721011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.721105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.721210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.721244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 
00:20:41.344 [2024-04-26 14:25:22.721348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.721449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.721475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.721574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.721674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.721700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.721803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.721896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.721921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.722011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.722138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.722163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.722257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.722352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.722387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.344 qpair failed and we were unable to recover it. 00:20:41.344 [2024-04-26 14:25:22.722484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.722576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.344 [2024-04-26 14:25:22.722600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.345 qpair failed and we were unable to recover it. 00:20:41.345 [2024-04-26 14:25:22.722715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.722806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.722831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.345 qpair failed and we were unable to recover it. 
00:20:41.345 [2024-04-26 14:25:22.722922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.723007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.723031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.345 qpair failed and we were unable to recover it. 00:20:41.345 [2024-04-26 14:25:22.723134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.723223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.723248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.345 qpair failed and we were unable to recover it. 00:20:41.345 [2024-04-26 14:25:22.723342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.723442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.723467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.345 qpair failed and we were unable to recover it. 00:20:41.345 [2024-04-26 14:25:22.723568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.723659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.723687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.345 qpair failed and we were unable to recover it. 00:20:41.345 [2024-04-26 14:25:22.723788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.723885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.723909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.345 qpair failed and we were unable to recover it. 00:20:41.345 [2024-04-26 14:25:22.724009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.724107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.724133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.345 qpair failed and we were unable to recover it. 00:20:41.345 [2024-04-26 14:25:22.724227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.724323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.724348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.345 qpair failed and we were unable to recover it. 
00:20:41.345 [2024-04-26 14:25:22.724449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.724542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.724566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.345 qpair failed and we were unable to recover it. 00:20:41.345 [2024-04-26 14:25:22.724656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.724745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.724769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.345 qpair failed and we were unable to recover it. 00:20:41.345 [2024-04-26 14:25:22.724857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.724946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.724971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.345 qpair failed and we were unable to recover it. 00:20:41.345 [2024-04-26 14:25:22.725072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.725171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.725197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.345 qpair failed and we were unable to recover it. 00:20:41.345 [2024-04-26 14:25:22.725291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.725384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.725408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.345 qpair failed and we were unable to recover it. 00:20:41.345 [2024-04-26 14:25:22.725504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.725596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.725620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.345 qpair failed and we were unable to recover it. 00:20:41.345 [2024-04-26 14:25:22.725735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.725835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.345 [2024-04-26 14:25:22.725860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.345 qpair failed and we were unable to recover it. 
[... the same four-line sequence (two posix_sock_create "connect() failed, errno = 111" lines, one nvme_tcp_qpair_connect_sock "sock connection error" line, then "qpair failed and we were unable to recover it.") repeats without variation from 14:25:22.725953 through 14:25:22.760100, cycling among tqpair handles 0x124d340, 0x7f6a58000b90, 0x7f6a60000b90, and 0x7f6a68000b90, every attempt against addr=10.0.0.2, port=4420 ...]
00:20:41.349 [2024-04-26 14:25:22.760201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.760297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.760323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 00:20:41.349 [2024-04-26 14:25:22.760420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.760550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.760575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 00:20:41.349 [2024-04-26 14:25:22.760677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.760803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.760829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 00:20:41.349 [2024-04-26 14:25:22.760923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.761014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.761039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 00:20:41.349 [2024-04-26 14:25:22.761137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.761236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.761264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 00:20:41.349 [2024-04-26 14:25:22.761360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.761496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.761523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 00:20:41.349 [2024-04-26 14:25:22.761616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.761723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.761751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 
00:20:41.349 [2024-04-26 14:25:22.761851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.761949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.761976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 00:20:41.349 [2024-04-26 14:25:22.762084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.762173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.762199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 00:20:41.349 [2024-04-26 14:25:22.762299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.762397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.762425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 00:20:41.349 [2024-04-26 14:25:22.762548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.762644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.762671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 00:20:41.349 [2024-04-26 14:25:22.762770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.762872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.762900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 00:20:41.349 [2024-04-26 14:25:22.763007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.763112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.763139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 00:20:41.349 [2024-04-26 14:25:22.763235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.763335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.763362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 
00:20:41.349 [2024-04-26 14:25:22.763459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.763553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.763578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 00:20:41.349 [2024-04-26 14:25:22.763680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.763779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.763805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 00:20:41.349 [2024-04-26 14:25:22.763932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.764033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.764058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 00:20:41.349 [2024-04-26 14:25:22.764159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.764284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.764309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 00:20:41.349 [2024-04-26 14:25:22.764438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.764532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.764559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 00:20:41.349 [2024-04-26 14:25:22.764653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.764748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.764775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 00:20:41.349 [2024-04-26 14:25:22.764877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.765001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.765028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 
00:20:41.349 [2024-04-26 14:25:22.765127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.765225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.765251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 00:20:41.349 [2024-04-26 14:25:22.765345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.765439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.765465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 00:20:41.349 [2024-04-26 14:25:22.765559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.765668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.765695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 00:20:41.349 [2024-04-26 14:25:22.765793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.765887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.765914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 00:20:41.349 [2024-04-26 14:25:22.766018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.766131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.766156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 00:20:41.349 [2024-04-26 14:25:22.766252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.766357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.766382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 00:20:41.349 [2024-04-26 14:25:22.766488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.766577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.349 [2024-04-26 14:25:22.766602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.349 qpair failed and we were unable to recover it. 
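errno = 111 on Linux is ECONNREFUSED: the host can reach 10.0.0.2, but nothing is accepting on port 4420 (the NVMe-oF/TCP listener has not come up, or has gone away), so each connect() is rejected immediately and the initiator keeps retrying the qpair. A minimal standalone repro sketch, not SPDK code and assuming the address is reachable with no listener on the port, that surfaces the same failure from bash:

    # Hypothetical repro (not from this job): with nothing listening on the
    # port, bash's built-in /dev/tcp connect fails with "Connection refused",
    # the same ECONNREFUSED (errno 111) that posix_sock_create reports above.
    bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'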
00:20:41.349 [2024-04-26 14:25:22.767168] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:41.349 [2024-04-26 14:25:22.767203] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:41.349 [2024-04-26 14:25:22.767219] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:41.349 [2024-04-26 14:25:22.767249] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:41.349 [2024-04-26 14:25:22.767263] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:41.349 [2024-04-26 14:25:22.767325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:20:41.349 [2024-04-26 14:25:22.767376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:20:41.350 [2024-04-26 14:25:22.767775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:20:41.350 [2024-04-26 14:25:22.767800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
[Around and interleaved with this startup banner, five more connect() retries over 14:25:22.766-14:25:22.767 fail with errno = 111 on tqpair=0x124d340 and 0x7f6a68000b90; each ends with "qpair failed and we were unable to recover it."]
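The app_setup_trace banner is the target side coming up with tracepoint group mask 0xFFFF, and it names its own capture workflow, so the trace buffer can be snapshotted while the retries churn or preserved for post-mortem. A sketch using only the commands the banner itself suggests (instance id 0, shm file /dev/shm/nvmf_trace.0):

    # Both lines are lifted from the banner's own suggestions.
    spdk_trace -s nvmf -i 0        # live snapshot of nvmf tracepoint events
    cp /dev/shm/nvmf_trace.0 .     # keep the shm buffer for offline analysis/debug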
[~90 further connect() retries over 14:25:22.768-14:25:22.789 repeat the same three-line failure verbatim apart from timestamps, rotating across tqpair=0x7f6a68000b90, 0x124d340, and 0x7f6a58000b90; every attempt ends with "qpair failed and we were unable to recover it." The excerpt closes with the final attempt:]
00:20:41.352 [2024-04-26 14:25:22.789016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.352 [2024-04-26 14:25:22.789116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:41.352 [2024-04-26 14:25:22.789144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420
00:20:41.352 qpair failed and we were unable to recover it.
00:20:41.352 [2024-04-26 14:25:22.789245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.789342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.789368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 00:20:41.352 [2024-04-26 14:25:22.789473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.789572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.789599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 00:20:41.352 [2024-04-26 14:25:22.789725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.789822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.789848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 00:20:41.352 [2024-04-26 14:25:22.789942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.790044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.790071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 00:20:41.352 [2024-04-26 14:25:22.790163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.790284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.790311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 00:20:41.352 [2024-04-26 14:25:22.790427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.790525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.790550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 00:20:41.352 [2024-04-26 14:25:22.790666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.790769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.790796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 
00:20:41.352 [2024-04-26 14:25:22.790925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.791011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.791036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 00:20:41.352 [2024-04-26 14:25:22.791152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.791250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.791274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 00:20:41.352 [2024-04-26 14:25:22.791374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.791473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.791498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 00:20:41.352 [2024-04-26 14:25:22.791601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.791710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.791739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 00:20:41.352 [2024-04-26 14:25:22.791840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.791945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.791971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 00:20:41.352 [2024-04-26 14:25:22.792068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.792161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.792187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 00:20:41.352 [2024-04-26 14:25:22.792282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.792376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.792401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 
00:20:41.352 [2024-04-26 14:25:22.792514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.792619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.792654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 00:20:41.352 [2024-04-26 14:25:22.792757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.792851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.792876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 00:20:41.352 [2024-04-26 14:25:22.792977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.793075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.793101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 00:20:41.352 [2024-04-26 14:25:22.793195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.793281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.793305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 00:20:41.352 [2024-04-26 14:25:22.793398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.793487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.793511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 00:20:41.352 [2024-04-26 14:25:22.793620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.793730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.793754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 00:20:41.352 [2024-04-26 14:25:22.793869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.793974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.793998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 
00:20:41.352 [2024-04-26 14:25:22.794100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.794197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.794222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 00:20:41.352 [2024-04-26 14:25:22.794324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.794428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.794453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 00:20:41.352 [2024-04-26 14:25:22.794549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.794663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.794690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 00:20:41.352 [2024-04-26 14:25:22.794789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.794894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.794919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 00:20:41.352 [2024-04-26 14:25:22.795012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.795111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.795135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 00:20:41.352 [2024-04-26 14:25:22.795240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.795344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.795368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 00:20:41.352 [2024-04-26 14:25:22.795521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.795620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.352 [2024-04-26 14:25:22.795650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.352 qpair failed and we were unable to recover it. 
00:20:41.353 [2024-04-26 14:25:22.795752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.795847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.795872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.795976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.796079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.796104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.796217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.796319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.796344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.796439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.796542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.796567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.796669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.796780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.796807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.796903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.797005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.797029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.797130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.797225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.797251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 
00:20:41.353 [2024-04-26 14:25:22.797389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.797504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.797533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.797645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.797748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.797776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.797910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.798013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.798039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.798132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.798226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.798252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.798351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.798460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.798487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.798592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.798709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.798749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.798855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.798959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.798987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 
00:20:41.353 [2024-04-26 14:25:22.799092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.799204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.799230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.799331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.799436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.799463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.799570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.799697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.799724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.799829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.799933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.799960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.800071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.800176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.800204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.800335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.800431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.800457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.800551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.800661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.800688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 
00:20:41.353 [2024-04-26 14:25:22.800797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.800903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.800929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.801031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.801138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.801172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.801275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.801381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.801410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.801515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.801626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.801700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.801814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.801922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.801949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.802076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.802184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.802210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.802313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.802419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.802448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 
00:20:41.353 [2024-04-26 14:25:22.802550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.802654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.802681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.802789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.802945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.802989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.803121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.803251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.803289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.803421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.803544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.803571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.803683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.803785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.803817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.803922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.804020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.804047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.804173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.804280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.804307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 
00:20:41.353 [2024-04-26 14:25:22.804409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.804518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.804546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.804660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.804762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.804787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.804894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.804997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.805026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.805135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.805229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.805255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.805353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.805449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.805475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.805575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.805672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.805699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.353 qpair failed and we were unable to recover it. 00:20:41.353 [2024-04-26 14:25:22.805798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.805905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.353 [2024-04-26 14:25:22.805935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 
00:20:41.354 [2024-04-26 14:25:22.806039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.806137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.806163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 00:20:41.354 [2024-04-26 14:25:22.806276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.806382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.806408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 00:20:41.354 [2024-04-26 14:25:22.806511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.806615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.806649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 00:20:41.354 [2024-04-26 14:25:22.806754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.806873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.806899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 00:20:41.354 [2024-04-26 14:25:22.807004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.807109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.807136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 00:20:41.354 [2024-04-26 14:25:22.807252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.807355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.807381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 00:20:41.354 [2024-04-26 14:25:22.807487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.807588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.807614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 
00:20:41.354 [2024-04-26 14:25:22.807728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.807856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.807883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 00:20:41.354 [2024-04-26 14:25:22.807983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.808089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.808116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 00:20:41.354 [2024-04-26 14:25:22.808219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.808327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.808363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 00:20:41.354 [2024-04-26 14:25:22.808493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.808599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.808626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 00:20:41.354 [2024-04-26 14:25:22.808780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.808889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.808915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 00:20:41.354 [2024-04-26 14:25:22.809023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.809124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.809151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 00:20:41.354 [2024-04-26 14:25:22.809253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.809355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.809382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 
00:20:41.354 [2024-04-26 14:25:22.809485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.809586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.809610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 00:20:41.354 [2024-04-26 14:25:22.809737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.809851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.809879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 00:20:41.354 [2024-04-26 14:25:22.809977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.810075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.810103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 00:20:41.354 [2024-04-26 14:25:22.810199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.810309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.810335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 00:20:41.354 [2024-04-26 14:25:22.810458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.810558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.810585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 00:20:41.354 [2024-04-26 14:25:22.810694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.810788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.810815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 00:20:41.354 [2024-04-26 14:25:22.810921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.811015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.811042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 
00:20:41.354 [2024-04-26 14:25:22.811155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.811256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.811281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 00:20:41.354 [2024-04-26 14:25:22.811397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.811501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.811526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 00:20:41.354 [2024-04-26 14:25:22.811617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.811725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.811750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 00:20:41.354 [2024-04-26 14:25:22.811857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.811967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.811994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 00:20:41.354 [2024-04-26 14:25:22.812104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.812199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.812223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 00:20:41.354 [2024-04-26 14:25:22.812323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.812422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.812448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 00:20:41.354 [2024-04-26 14:25:22.812578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.812712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.812741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it. 
00:20:41.354 [2024-04-26 14:25:22.812843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.812951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.812977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124d340 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it.
00:20:41.354 [... the same connect() failed, errno = 111 / sock connection error sequence repeats 2 more times for tqpair=0x124d340 (14:25:22.813075 through 14:25:22.813456), each attempt ending "qpair failed and we were unable to recover it." ...]
00:20:41.354 [2024-04-26 14:25:22.813603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.813740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.813781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it.
00:20:41.354 [2024-04-26 14:25:22.813923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.814040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.814072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a68000b90 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it.
00:20:41.354 [2024-04-26 14:25:22.814201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.814308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.354 [2024-04-26 14:25:22.814335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a60000b90 with addr=10.0.0.2, port=4420 00:20:41.354 qpair failed and we were unable to recover it.
00:20:41.354 [... the same sequence repeats 60 more times for tqpair=0x7f6a60000b90 (14:25:22.814440 through 14:25:22.828494), every attempt refused with errno = 111 and ending "qpair failed and we were unable to recover it." ...]
00:20:41.356 A controller has encountered a failure and is being reset.
00:20:41.356 [2024-04-26 14:25:22.828641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.356 [2024-04-26 14:25:22.828775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.356 [2024-04-26 14:25:22.828815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a58000b90 with addr=10.0.0.2, port=4420 00:20:41.356 qpair failed and we were unable to recover it.
00:20:41.356 [... the same sequence repeats 3 more times for tqpair=0x7f6a58000b90 (14:25:22.828959 through 14:25:22.829710), each ending "qpair failed and we were unable to recover it." ...]
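For context: errno 111 is ECONNREFUSED, so the storm above is the host driver retrying against a listener that the disconnect test has deliberately torn down, right up until the controller reset path gives up below. A minimal sketch of how one could probe the listener state by hand while such a storm is running; the address and port are copied from the trace, and the availability of nc(1) on the build node is an assumption:

# zero-I/O scan of the traced address; -z = connect only, -w 1 = 1 second timeout
if nc -z -w 1 10.0.0.2 4420; then
    echo "listener up: connect() should now succeed"
else
    echo "connect refused or timed out: consistent with errno = 111 (ECONNREFUSED)"
fi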
00:20:41.356 [2024-04-26 14:25:22.829870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.356 [2024-04-26 14:25:22.830022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:41.356 [2024-04-26 14:25:22.830056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124c1c0 with addr=10.0.0.2, port=4420 00:20:41.356 [2024-04-26 14:25:22.830076] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124c1c0 is same with the state(5) to be set 00:20:41.356 [2024-04-26 14:25:22.830102] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124c1c0 (9): Bad file descriptor 00:20:41.356 [2024-04-26 14:25:22.830122] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.356 [2024-04-26 14:25:22.830136] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:41.356 [2024-04-26 14:25:22.830152] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:41.356 Unable to reset the controller. 00:20:41.643 14:25:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:41.643 14:25:22 -- common/autotest_common.sh@850 -- # return 0 00:20:41.643 14:25:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:41.643 14:25:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:41.643 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:20:41.643 14:25:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.643 14:25:22 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:41.643 14:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.643 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:20:41.643 Malloc0 00:20:41.643 14:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.644 14:25:22 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:41.644 14:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.644 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:20:41.644 [2024-04-26 14:25:22.949594] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.644 14:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.644 14:25:22 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:41.644 14:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.644 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:20:41.644 14:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.644 14:25:22 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:41.644 14:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.644 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:20:41.644 14:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.644 14:25:22 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:41.644 14:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.644 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:20:41.644 [2024-04-26 14:25:22.977824] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:20:41.644 14:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.644 14:25:22 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:41.644 14:25:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.644 14:25:22 -- common/autotest_common.sh@10 -- # set +x 00:20:41.644 14:25:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.644 14:25:22 -- host/target_disconnect.sh@58 -- # wait 3213581 00:20:42.593 Controller properly reset. 00:20:47.855 Initializing NVMe Controllers 00:20:47.855 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:47.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:47.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:20:47.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:20:47.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:20:47.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:20:47.855 Initialization complete. Launching workers. 00:20:47.855 Starting thread on core 1 00:20:47.855 Starting thread on core 2 00:20:47.855 Starting thread on core 3 00:20:47.855 Starting thread on core 0 00:20:47.855 14:25:28 -- host/target_disconnect.sh@59 -- # sync 00:20:47.855 00:20:47.855 real 0m10.662s 00:20:47.855 user 0m32.829s 00:20:47.855 sys 0m7.819s 00:20:47.855 14:25:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:47.855 14:25:28 -- common/autotest_common.sh@10 -- # set +x 00:20:47.855 ************************************ 00:20:47.855 END TEST nvmf_target_disconnect_tc2 00:20:47.855 ************************************ 00:20:47.855 14:25:28 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:20:47.855 14:25:28 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:20:47.855 14:25:28 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:20:47.855 14:25:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:47.855 14:25:28 -- nvmf/common.sh@117 -- # sync 00:20:47.855 14:25:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:47.855 14:25:28 -- nvmf/common.sh@120 -- # set +e 00:20:47.855 14:25:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:47.855 14:25:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:47.855 rmmod nvme_tcp 00:20:47.855 rmmod nvme_fabrics 00:20:47.855 rmmod nvme_keyring 00:20:47.855 14:25:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:47.855 14:25:28 -- nvmf/common.sh@124 -- # set -e 00:20:47.855 14:25:28 -- nvmf/common.sh@125 -- # return 0 00:20:47.855 14:25:28 -- nvmf/common.sh@478 -- # '[' -n 3213985 ']' 00:20:47.855 14:25:28 -- nvmf/common.sh@479 -- # killprocess 3213985 00:20:47.855 14:25:28 -- common/autotest_common.sh@936 -- # '[' -z 3213985 ']' 00:20:47.855 14:25:28 -- common/autotest_common.sh@940 -- # kill -0 3213985 00:20:47.855 14:25:28 -- common/autotest_common.sh@941 -- # uname 00:20:47.855 14:25:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:47.855 14:25:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3213985 00:20:47.855 14:25:28 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:20:47.855 14:25:28 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:20:47.855 14:25:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3213985' 00:20:47.855 killing 
process with pid 3213985 00:20:47.855 14:25:28 -- common/autotest_common.sh@955 -- # kill 3213985 00:20:47.855 14:25:28 -- common/autotest_common.sh@960 -- # wait 3213985 00:20:47.855 14:25:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:47.855 14:25:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:47.855 14:25:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:47.855 14:25:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:47.855 14:25:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:47.855 14:25:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.855 14:25:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:47.855 14:25:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.764 14:25:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:49.764 00:20:49.764 real 0m15.213s 00:20:49.764 user 0m57.537s 00:20:49.764 sys 0m10.142s 00:20:49.764 14:25:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:49.764 14:25:31 -- common/autotest_common.sh@10 -- # set +x 00:20:49.764 ************************************ 00:20:49.764 END TEST nvmf_target_disconnect 00:20:49.764 ************************************ 00:20:49.764 14:25:31 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:20:49.764 14:25:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:49.764 14:25:31 -- common/autotest_common.sh@10 -- # set +x 00:20:49.764 14:25:31 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:20:49.764 00:20:49.764 real 14m54.875s 00:20:49.764 user 35m22.151s 00:20:49.764 sys 3m50.949s 00:20:49.764 14:25:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:49.764 14:25:31 -- common/autotest_common.sh@10 -- # set +x 00:20:49.764 ************************************ 00:20:49.764 END TEST nvmf_tcp 00:20:49.764 ************************************ 00:20:49.764 14:25:31 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:20:49.764 14:25:31 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:20:49.764 14:25:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:49.764 14:25:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:49.764 14:25:31 -- common/autotest_common.sh@10 -- # set +x 00:20:49.764 ************************************ 00:20:49.764 START TEST spdkcli_nvmf_tcp 00:20:49.764 ************************************ 00:20:49.764 14:25:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:20:50.024 * Looking for test storage... 
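Before the spdkcli run that starts here drives the target through its CLI, it is worth pinning down what the bring-up in the disconnect test above actually was: rpc_cmd is the test harness wrapper around scripts/rpc.py, so the whole target came up through six plain JSON-RPC calls. A standalone sketch of the equivalent sequence, with every flag copied from the trace and the RPC socket left at its default:

# 64 MB malloc ramdisk with 512-byte blocks as the backing bdev
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# TCP transport, flags exactly as traced
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# data listener plus the discovery listener, both on 10.0.0.2:4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420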
00:20:50.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:20:50.024 14:25:31 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:20:50.024 14:25:31 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:20:50.024 14:25:31 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:20:50.024 14:25:31 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:50.024 14:25:31 -- nvmf/common.sh@7 -- # uname -s 00:20:50.024 14:25:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:50.024 14:25:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:50.024 14:25:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:50.024 14:25:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:50.024 14:25:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:50.024 14:25:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:50.024 14:25:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:50.024 14:25:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:50.024 14:25:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:50.024 14:25:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:50.024 14:25:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:50.024 14:25:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:50.024 14:25:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:50.024 14:25:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:50.024 14:25:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:50.024 14:25:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:50.024 14:25:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:50.024 14:25:31 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.024 14:25:31 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.024 14:25:31 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.024 14:25:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.024 14:25:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.024 14:25:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.024 14:25:31 -- paths/export.sh@5 -- # export PATH 00:20:50.024 14:25:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.024 14:25:31 -- nvmf/common.sh@47 -- # : 0 00:20:50.024 14:25:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:50.024 14:25:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:50.024 14:25:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:50.024 14:25:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:50.024 14:25:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:50.024 14:25:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:50.024 14:25:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:50.024 14:25:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:50.024 14:25:31 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:20:50.024 14:25:31 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:20:50.024 14:25:31 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:20:50.024 14:25:31 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:20:50.024 14:25:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:50.024 14:25:31 -- common/autotest_common.sh@10 -- # set +x 00:20:50.024 14:25:31 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:20:50.024 14:25:31 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3214927 00:20:50.024 14:25:31 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:20:50.024 14:25:31 -- spdkcli/common.sh@34 -- # waitforlisten 3214927 00:20:50.024 14:25:31 -- common/autotest_common.sh@817 -- # '[' -z 3214927 ']' 00:20:50.024 14:25:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.024 14:25:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:50.024 14:25:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.024 14:25:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:50.024 14:25:31 -- common/autotest_common.sh@10 -- # set +x 00:20:50.024 [2024-04-26 14:25:31.429064] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
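The waitforlisten 3214927 call above parks the test until the freshly launched nvmf_tgt (pid 3214927) answers on its RPC socket. A minimal sketch of the same launch-and-wait done by hand; the binary path and core mask are copied from the trace, /var/tmp/spdk.sock is assumed as the default socket, and rpc_get_methods is used purely as a liveness probe:

# launch the target exactly as the test does, then poll its RPC socket
build/bin/nvmf_tgt -m 0x3 -p 0 &
tgt_pid=$!
until scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    # bail out early if the app died instead of listening
    kill -0 "$tgt_pid" || { echo "nvmf_tgt exited before listening"; exit 1; }
    sleep 0.2
done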
00:20:50.024 [2024-04-26 14:25:31.429172] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3214927 ] 00:20:50.024 EAL: No free 2048 kB hugepages reported on node 1 00:20:50.024 [2024-04-26 14:25:31.491927] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:50.283 [2024-04-26 14:25:31.607385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.283 [2024-04-26 14:25:31.607390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.283 14:25:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:50.283 14:25:31 -- common/autotest_common.sh@850 -- # return 0 00:20:50.283 14:25:31 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:20:50.283 14:25:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:50.283 14:25:31 -- common/autotest_common.sh@10 -- # set +x 00:20:50.283 14:25:31 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:20:50.283 14:25:31 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:20:50.283 14:25:31 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:20:50.283 14:25:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:50.283 14:25:31 -- common/autotest_common.sh@10 -- # set +x 00:20:50.283 14:25:31 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:20:50.283 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:20:50.283 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:20:50.283 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:20:50.283 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:20:50.283 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:20:50.283 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:20:50.283 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:20:50.283 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:20:50.283 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:20:50.283 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:20:50.283 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:20:50.283 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:20:50.283 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:20:50.283 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:20:50.283 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:20:50.283 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:20:50.283 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' 
'\''127.0.0.1:4261'\'' True 00:20:50.283 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:20:50.283 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:20:50.283 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:20:50.283 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:20:50.283 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:20:50.283 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:20:50.283 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:20:50.283 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:20:50.283 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:20:50.283 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:20:50.283 ' 00:20:50.850 [2024-04-26 14:25:32.149225] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:53.380 [2024-04-26 14:25:34.321538] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.315 [2024-04-26 14:25:35.565781] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:20:56.914 [2024-04-26 14:25:37.872928] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:20:58.295 [2024-04-26 14:25:39.851080] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:21:00.196 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:21:00.196 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:21:00.196 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:21:00.196 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:21:00.196 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:21:00.196 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:21:00.196 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:21:00.196 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:00.196 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:21:00.196 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:21:00.197 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:00.197 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:00.197 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:21:00.197 
Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:00.197 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:00.197 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:21:00.197 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:00.197 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:21:00.197 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:00.197 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:00.197 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:21:00.197 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:21:00.197 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:21:00.197 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:21:00.197 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:00.197 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:21:00.197 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:21:00.197 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:21:00.197 14:25:41 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:21:00.197 14:25:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:00.197 14:25:41 -- common/autotest_common.sh@10 -- # set +x 00:21:00.197 14:25:41 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:21:00.197 14:25:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:00.197 14:25:41 -- common/autotest_common.sh@10 -- # set +x 00:21:00.197 14:25:41 -- spdkcli/nvmf.sh@69 -- # check_match 00:21:00.197 14:25:41 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:21:00.455 14:25:41 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:21:00.455 14:25:41 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:21:00.455 14:25:41 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:21:00.455 14:25:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:00.455 14:25:41 -- common/autotest_common.sh@10 -- # set +x 00:21:00.455 14:25:41 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:21:00.455 14:25:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:00.455 14:25:41 -- common/autotest_common.sh@10 
-- # set +x 00:21:00.455 14:25:41 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:21:00.455 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:21:00.455 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:00.455 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:21:00.455 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:21:00.455 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:21:00.455 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:21:00.456 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:00.456 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:21:00.456 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:21:00.456 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:21:00.456 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:21:00.456 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:21:00.456 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:21:00.456 ' 00:21:05.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:21:05.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:21:05.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:21:05.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:21:05.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:21:05.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:21:05.719 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:21:05.719 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:21:05.719 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:21:05.719 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:21:05.719 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:21:05.719 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:21:05.719 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:21:05.719 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:21:05.719 14:25:47 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:21:05.719 14:25:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:05.719 14:25:47 -- common/autotest_common.sh@10 -- # set +x 00:21:05.719 14:25:47 -- spdkcli/nvmf.sh@90 -- # killprocess 3214927 00:21:05.719 14:25:47 -- common/autotest_common.sh@936 -- # '[' -z 3214927 ']' 00:21:05.719 14:25:47 -- common/autotest_common.sh@940 -- # kill -0 3214927 00:21:05.719 14:25:47 -- common/autotest_common.sh@941 -- # uname 00:21:05.719 14:25:47 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:05.719 14:25:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3214927 00:21:05.977 14:25:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:05.977 14:25:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:05.977 14:25:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3214927' 00:21:05.977 killing process with pid 3214927 00:21:05.977 14:25:47 -- common/autotest_common.sh@955 -- # kill 3214927 00:21:05.977 [2024-04-26 14:25:47.298464] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:05.977 14:25:47 -- common/autotest_common.sh@960 -- # wait 3214927 00:21:05.977 14:25:47 -- spdkcli/nvmf.sh@1 -- # cleanup 00:21:05.977 14:25:47 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:21:05.977 14:25:47 -- spdkcli/common.sh@13 -- # '[' -n 3214927 ']' 00:21:05.977 14:25:47 -- spdkcli/common.sh@14 -- # killprocess 3214927 00:21:05.977 14:25:47 -- common/autotest_common.sh@936 -- # '[' -z 3214927 ']' 00:21:05.977 14:25:47 -- common/autotest_common.sh@940 -- # kill -0 3214927 00:21:05.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3214927) - No such process 00:21:05.977 14:25:47 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3214927 is not found' 00:21:05.977 Process with pid 3214927 is not found 00:21:05.977 14:25:47 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:21:05.977 14:25:47 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:21:05.977 14:25:47 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:21:05.977 00:21:05.977 real 0m16.211s 00:21:05.977 user 0m34.456s 00:21:05.977 sys 0m0.804s 00:21:05.977 14:25:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:05.977 14:25:47 -- common/autotest_common.sh@10 -- # set +x 00:21:05.977 ************************************ 00:21:05.977 END TEST spdkcli_nvmf_tcp 00:21:05.977 ************************************ 00:21:05.977 14:25:47 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:21:05.977 14:25:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:05.977 14:25:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:05.977 14:25:47 -- common/autotest_common.sh@10 -- # set +x 00:21:06.235 ************************************ 00:21:06.235 START TEST nvmf_identify_passthru 00:21:06.235 ************************************ 00:21:06.235 14:25:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:21:06.235 * Looking for test storage... 
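The killprocess helper traced twice in this log (pids 3213985 and 3214927, including the "No such process" branch just above) follows a recognisable pattern: probe the pid with kill -0, read its command name, refuse to signal a bare sudo wrapper, then kill and reap. A condensed sketch of that logic as reconstructed from the xtrace; the real implementation lives in test/common/autotest_common.sh and may differ in detail:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    # kill -0 probes liveness; the 'Process with pid ... is not found' branch comes from here
    kill -0 "$pid" 2>/dev/null || { echo "Process with pid $pid is not found"; return 0; }
    local process_name=
    [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1   # never signal the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                      # reap it if it was our child
}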
00:21:06.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:06.235 14:25:47 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:06.235 14:25:47 -- nvmf/common.sh@7 -- # uname -s 00:21:06.235 14:25:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:06.235 14:25:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.235 14:25:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:06.235 14:25:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.235 14:25:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:06.235 14:25:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.235 14:25:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.235 14:25:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.235 14:25:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.235 14:25:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:06.235 14:25:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:06.235 14:25:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:21:06.235 14:25:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.235 14:25:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.235 14:25:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:06.235 14:25:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:06.235 14:25:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:06.235 14:25:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.235 14:25:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.235 14:25:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.235 14:25:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.235 14:25:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.235 14:25:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.235 14:25:47 -- paths/export.sh@5 -- # export PATH 00:21:06.235 14:25:47 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.235 14:25:47 -- nvmf/common.sh@47 -- # : 0 00:21:06.235 14:25:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:06.236 14:25:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:06.236 14:25:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:06.236 14:25:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.236 14:25:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.236 14:25:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:06.236 14:25:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:06.236 14:25:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:06.236 14:25:47 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:06.236 14:25:47 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.236 14:25:47 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.236 14:25:47 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.236 14:25:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.236 14:25:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.236 14:25:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.236 14:25:47 -- paths/export.sh@5 -- # export PATH 00:21:06.236 14:25:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.236 14:25:47 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:21:06.236 14:25:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:06.236 14:25:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.236 14:25:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:06.236 14:25:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:06.236 14:25:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:06.236 14:25:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.236 14:25:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:06.236 14:25:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.236 14:25:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:06.236 14:25:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:06.236 14:25:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:06.236 14:25:47 -- common/autotest_common.sh@10 -- # set +x 00:21:08.150 14:25:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:08.150 14:25:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:08.150 14:25:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:08.150 14:25:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:08.150 14:25:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:08.150 14:25:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:08.150 14:25:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:08.150 14:25:49 -- nvmf/common.sh@295 -- # net_devs=() 00:21:08.150 14:25:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:08.150 14:25:49 -- nvmf/common.sh@296 -- # e810=() 00:21:08.150 14:25:49 -- nvmf/common.sh@296 -- # local -ga e810 00:21:08.150 14:25:49 -- nvmf/common.sh@297 -- # x722=() 00:21:08.150 14:25:49 -- nvmf/common.sh@297 -- # local -ga x722 00:21:08.150 14:25:49 -- nvmf/common.sh@298 -- # mlx=() 00:21:08.150 14:25:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:08.150 14:25:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:08.150 14:25:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:08.150 14:25:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:08.150 14:25:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:08.150 14:25:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:08.150 14:25:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:08.150 14:25:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:08.150 14:25:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:08.150 14:25:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:08.150 14:25:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:08.150 14:25:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:08.150 14:25:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:08.150 14:25:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:08.150 14:25:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:08.150 14:25:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:08.150 14:25:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:08.150 14:25:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:08.150 14:25:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:08.151 14:25:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:21:08.151 Found 0000:08:00.0 (0x8086 - 
0x159b) 00:21:08.151 14:25:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:08.151 14:25:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:08.151 14:25:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.151 14:25:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.151 14:25:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:08.151 14:25:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:08.151 14:25:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:21:08.151 Found 0000:08:00.1 (0x8086 - 0x159b) 00:21:08.151 14:25:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:08.151 14:25:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:08.151 14:25:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.151 14:25:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.151 14:25:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:08.151 14:25:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:08.151 14:25:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:08.151 14:25:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:08.151 14:25:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:08.151 14:25:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.151 14:25:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:08.151 14:25:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.151 14:25:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:21:08.151 Found net devices under 0000:08:00.0: cvl_0_0 00:21:08.151 14:25:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.151 14:25:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:08.151 14:25:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.151 14:25:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:08.151 14:25:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.151 14:25:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:21:08.151 Found net devices under 0000:08:00.1: cvl_0_1 00:21:08.151 14:25:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.151 14:25:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:08.151 14:25:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:08.151 14:25:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:08.151 14:25:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:08.151 14:25:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:08.151 14:25:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.151 14:25:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.151 14:25:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:08.151 14:25:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:08.151 14:25:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:08.151 14:25:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:08.151 14:25:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:08.151 14:25:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:08.151 14:25:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.151 14:25:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:08.151 14:25:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:08.151 14:25:49 -- nvmf/common.sh@248 -- # ip netns add 
cvl_0_0_ns_spdk 00:21:08.151 14:25:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:08.151 14:25:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:08.151 14:25:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:08.151 14:25:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:08.151 14:25:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:08.151 14:25:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:08.151 14:25:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:08.151 14:25:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:08.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:08.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:21:08.151 00:21:08.151 --- 10.0.0.2 ping statistics --- 00:21:08.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.151 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:21:08.151 14:25:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:08.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:08.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:21:08.151 00:21:08.151 --- 10.0.0.1 ping statistics --- 00:21:08.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.151 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:21:08.151 14:25:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.151 14:25:49 -- nvmf/common.sh@411 -- # return 0 00:21:08.151 14:25:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:08.151 14:25:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.151 14:25:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:08.151 14:25:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:08.151 14:25:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.151 14:25:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:08.151 14:25:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:08.151 14:25:49 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:21:08.151 14:25:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:08.151 14:25:49 -- common/autotest_common.sh@10 -- # set +x 00:21:08.151 14:25:49 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:21:08.151 14:25:49 -- common/autotest_common.sh@1510 -- # bdfs=() 00:21:08.151 14:25:49 -- common/autotest_common.sh@1510 -- # local bdfs 00:21:08.151 14:25:49 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:21:08.151 14:25:49 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:21:08.151 14:25:49 -- common/autotest_common.sh@1499 -- # bdfs=() 00:21:08.151 14:25:49 -- common/autotest_common.sh@1499 -- # local bdfs 00:21:08.151 14:25:49 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:08.151 14:25:49 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:08.151 14:25:49 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:21:08.151 14:25:49 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:21:08.151 14:25:49 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:84:00.0 00:21:08.151 14:25:49 -- common/autotest_common.sh@1513 -- # echo 0000:84:00.0 00:21:08.151 14:25:49 -- 
target/identify_passthru.sh@16 -- # bdf=0000:84:00.0 00:21:08.151 14:25:49 -- target/identify_passthru.sh@17 -- # '[' -z 0000:84:00.0 ']' 00:21:08.151 14:25:49 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:84:00.0' -i 0 00:21:08.151 14:25:49 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:21:08.151 14:25:49 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:21:08.151 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.343 14:25:53 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ8275016S1P0FGN 00:21:12.343 14:25:53 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:84:00.0' -i 0 00:21:12.343 14:25:53 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:21:12.343 14:25:53 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:21:12.343 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.526 14:25:57 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:21:16.526 14:25:57 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:21:16.526 14:25:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:16.526 14:25:57 -- common/autotest_common.sh@10 -- # set +x 00:21:16.526 14:25:57 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:21:16.526 14:25:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:16.526 14:25:57 -- common/autotest_common.sh@10 -- # set +x 00:21:16.526 14:25:57 -- target/identify_passthru.sh@31 -- # nvmfpid=3218495 00:21:16.526 14:25:57 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:16.526 14:25:57 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:16.526 14:25:57 -- target/identify_passthru.sh@35 -- # waitforlisten 3218495 00:21:16.526 14:25:57 -- common/autotest_common.sh@817 -- # '[' -z 3218495 ']' 00:21:16.526 14:25:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.526 14:25:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:16.526 14:25:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.526 14:25:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:16.526 14:25:57 -- common/autotest_common.sh@10 -- # set +x 00:21:16.526 [2024-04-26 14:25:57.983923] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:21:16.526 [2024-04-26 14:25:57.984027] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.526 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.526 [2024-04-26 14:25:58.050081] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:16.785 [2024-04-26 14:25:58.168476] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.785 [2024-04-26 14:25:58.168538] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
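The nvme_identify phase above resolves the first local NVMe controller and records its serial and model for the later passthru comparison: gen_nvme.sh emits a bdev config, jq pulls out traddr, and spdk_nvme_identify is pointed at that PCIe address. Condensed into a standalone sketch ($SPDK_DIR is a stand-in for the workspace path; the address and values come from this run):

  # Condensed from the trace above; the grep/awk filters are the test's own.
  bdf=$("$SPDK_DIR/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
  serial=$("$SPDK_DIR/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 |
           grep 'Serial Number:' | awk '{print $3}')
  model=$("$SPDK_DIR/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 |
          grep 'Model Number:' | awk '{print $3}')
  echo "$bdf $serial $model"    # here: 0000:84:00.0 PHLJ8275016S1P0FGN INTEL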
00:21:16.785 [2024-04-26 14:25:58.168554] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.785 [2024-04-26 14:25:58.168567] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.785 [2024-04-26 14:25:58.168579] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:16.785 [2024-04-26 14:25:58.168661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.785 [2024-04-26 14:25:58.168702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:16.785 [2024-04-26 14:25:58.168783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:16.785 [2024-04-26 14:25:58.168788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.785 14:25:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:16.785 14:25:58 -- common/autotest_common.sh@850 -- # return 0 00:21:16.785 14:25:58 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:21:16.785 14:25:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:16.785 14:25:58 -- common/autotest_common.sh@10 -- # set +x 00:21:16.785 INFO: Log level set to 20 00:21:16.785 INFO: Requests: 00:21:16.785 { 00:21:16.785 "jsonrpc": "2.0", 00:21:16.785 "method": "nvmf_set_config", 00:21:16.785 "id": 1, 00:21:16.785 "params": { 00:21:16.785 "admin_cmd_passthru": { 00:21:16.785 "identify_ctrlr": true 00:21:16.785 } 00:21:16.785 } 00:21:16.785 } 00:21:16.785 00:21:16.785 INFO: response: 00:21:16.785 { 00:21:16.785 "jsonrpc": "2.0", 00:21:16.785 "id": 1, 00:21:16.785 "result": true 00:21:16.785 } 00:21:16.785 00:21:16.785 14:25:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:16.785 14:25:58 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:21:16.785 14:25:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:16.785 14:25:58 -- common/autotest_common.sh@10 -- # set +x 00:21:16.785 INFO: Setting log level to 20 00:21:16.785 INFO: Setting log level to 20 00:21:16.785 INFO: Log level set to 20 00:21:16.785 INFO: Log level set to 20 00:21:16.785 INFO: Requests: 00:21:16.785 { 00:21:16.785 "jsonrpc": "2.0", 00:21:16.785 "method": "framework_start_init", 00:21:16.785 "id": 1 00:21:16.785 } 00:21:16.785 00:21:16.785 INFO: Requests: 00:21:16.785 { 00:21:16.785 "jsonrpc": "2.0", 00:21:16.785 "method": "framework_start_init", 00:21:16.785 "id": 1 00:21:16.785 } 00:21:16.785 00:21:16.785 [2024-04-26 14:25:58.330736] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:21:16.785 INFO: response: 00:21:16.785 { 00:21:16.785 "jsonrpc": "2.0", 00:21:16.785 "id": 1, 00:21:16.785 "result": true 00:21:16.785 } 00:21:16.785 00:21:16.785 INFO: response: 00:21:16.785 { 00:21:16.785 "jsonrpc": "2.0", 00:21:16.785 "id": 1, 00:21:16.785 "result": true 00:21:16.785 } 00:21:16.785 00:21:16.785 14:25:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:16.785 14:25:58 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:16.785 14:25:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:16.785 14:25:58 -- common/autotest_common.sh@10 -- # set +x 00:21:16.785 INFO: Setting log level to 40 00:21:16.785 INFO: Setting log level to 40 00:21:16.785 INFO: Setting log level to 40 00:21:16.785 [2024-04-26 14:25:58.340655] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.785 14:25:58 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:16.785 14:25:58 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:21:16.785 14:25:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:16.785 14:25:58 -- common/autotest_common.sh@10 -- # set +x 00:21:17.043 14:25:58 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:84:00.0 00:21:17.043 14:25:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:17.043 14:25:58 -- common/autotest_common.sh@10 -- # set +x 00:21:20.322 Nvme0n1 00:21:20.322 14:26:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:20.322 14:26:01 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:21:20.322 14:26:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:20.322 14:26:01 -- common/autotest_common.sh@10 -- # set +x 00:21:20.322 14:26:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:20.322 14:26:01 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:20.322 14:26:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:20.322 14:26:01 -- common/autotest_common.sh@10 -- # set +x 00:21:20.322 14:26:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:20.322 14:26:01 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:20.322 14:26:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:20.322 14:26:01 -- common/autotest_common.sh@10 -- # set +x 00:21:20.323 [2024-04-26 14:26:01.217053] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:20.323 14:26:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:20.323 14:26:01 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:21:20.323 14:26:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:20.323 14:26:01 -- common/autotest_common.sh@10 -- # set +x 00:21:20.323 [2024-04-26 14:26:01.224763] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:20.323 [ 00:21:20.323 { 00:21:20.323 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:20.323 "subtype": "Discovery", 00:21:20.323 "listen_addresses": [], 00:21:20.323 "allow_any_host": true, 00:21:20.323 "hosts": [] 00:21:20.323 }, 00:21:20.323 { 00:21:20.323 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.323 "subtype": "NVMe", 00:21:20.323 "listen_addresses": [ 00:21:20.323 { 00:21:20.323 "transport": "TCP", 00:21:20.323 "trtype": "TCP", 00:21:20.323 "adrfam": "IPv4", 00:21:20.323 "traddr": "10.0.0.2", 00:21:20.323 "trsvcid": "4420" 00:21:20.323 } 00:21:20.323 ], 00:21:20.323 "allow_any_host": true, 00:21:20.323 "hosts": [], 00:21:20.323 "serial_number": "SPDK00000000000001", 00:21:20.323 "model_number": "SPDK bdev Controller", 00:21:20.323 "max_namespaces": 1, 00:21:20.323 "min_cntlid": 1, 00:21:20.323 "max_cntlid": 65519, 00:21:20.323 "namespaces": [ 00:21:20.323 { 00:21:20.323 "nsid": 1, 00:21:20.323 "bdev_name": "Nvme0n1", 00:21:20.323 "name": "Nvme0n1", 00:21:20.323 "nguid": "D50C40967C4E42F0BC3C8569587B5C1F", 00:21:20.323 "uuid": "d50c4096-7c4e-42f0-bc3c-8569587b5c1f" 00:21:20.323 } 00:21:20.323 ] 00:21:20.323 } 00:21:20.323 ] 00:21:20.323 14:26:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:20.323 14:26:01 -- 
target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:20.323 14:26:01 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:21:20.323 14:26:01 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:21:20.323 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.323 14:26:01 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ8275016S1P0FGN 00:21:20.323 14:26:01 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:20.323 14:26:01 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:21:20.323 14:26:01 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:21:20.323 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.323 14:26:01 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:21:20.323 14:26:01 -- target/identify_passthru.sh@63 -- # '[' PHLJ8275016S1P0FGN '!=' PHLJ8275016S1P0FGN ']' 00:21:20.323 14:26:01 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:21:20.323 14:26:01 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:20.323 14:26:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:20.323 14:26:01 -- common/autotest_common.sh@10 -- # set +x 00:21:20.323 14:26:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:20.323 14:26:01 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:21:20.323 14:26:01 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:21:20.323 14:26:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:20.323 14:26:01 -- nvmf/common.sh@117 -- # sync 00:21:20.323 14:26:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:20.323 14:26:01 -- nvmf/common.sh@120 -- # set +e 00:21:20.323 14:26:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:20.323 14:26:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:20.323 rmmod nvme_tcp 00:21:20.323 rmmod nvme_fabrics 00:21:20.323 rmmod nvme_keyring 00:21:20.323 14:26:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:20.323 14:26:01 -- nvmf/common.sh@124 -- # set -e 00:21:20.323 14:26:01 -- nvmf/common.sh@125 -- # return 0 00:21:20.323 14:26:01 -- nvmf/common.sh@478 -- # '[' -n 3218495 ']' 00:21:20.323 14:26:01 -- nvmf/common.sh@479 -- # killprocess 3218495 00:21:20.323 14:26:01 -- common/autotest_common.sh@936 -- # '[' -z 3218495 ']' 00:21:20.323 14:26:01 -- common/autotest_common.sh@940 -- # kill -0 3218495 00:21:20.323 14:26:01 -- common/autotest_common.sh@941 -- # uname 00:21:20.323 14:26:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:20.323 14:26:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3218495 00:21:20.323 14:26:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:20.323 14:26:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:20.323 14:26:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3218495' 00:21:20.323 killing process with pid 3218495 00:21:20.323 14:26:01 -- common/autotest_common.sh@955 -- # kill 3218495 00:21:20.323 [2024-04-26 14:26:01.710866] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 
hit 1 times 00:21:20.323 14:26:01 -- common/autotest_common.sh@960 -- # wait 3218495 00:21:21.696 14:26:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:21.696 14:26:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:21.696 14:26:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:21.696 14:26:03 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:21.696 14:26:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:21.696 14:26:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.956 14:26:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:21.956 14:26:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.861 14:26:05 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:23.861 00:21:23.861 real 0m17.655s 00:21:23.861 user 0m26.713s 00:21:23.861 sys 0m2.007s 00:21:23.861 14:26:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:23.861 14:26:05 -- common/autotest_common.sh@10 -- # set +x 00:21:23.861 ************************************ 00:21:23.861 END TEST nvmf_identify_passthru 00:21:23.861 ************************************ 00:21:23.861 14:26:05 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:21:23.861 14:26:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:23.861 14:26:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:23.861 14:26:05 -- common/autotest_common.sh@10 -- # set +x 00:21:24.120 ************************************ 00:21:24.120 START TEST nvmf_dif 00:21:24.120 ************************************ 00:21:24.121 14:26:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:21:24.121 * Looking for test storage... 
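The identify_passthru pass that just ended verifies admin-command passthru end to end: nvmf_set_config --passthru-identify-ctrlr is issued before framework init, the local PCIe drive is attached as Nvme0 and exported through nqn.2016-06.io.spdk:cnode1 over TCP, and the test passes when the serial and model read across the fabric match the ones read locally. Collapsed from the rpc_cmd calls in the trace (rpc_cmd is the test wrapper around scripts/rpc.py; $SPDK_DIR and $serial as in the sketch above):

  # The same sequence the test ran, as direct rpc.py calls.
  "$SPDK_DIR/scripts/rpc.py" nvmf_set_config --passthru-identify-ctrlr
  "$SPDK_DIR/scripts/rpc.py" framework_start_init
  "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
  "$SPDK_DIR/scripts/rpc.py" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:84:00.0
  "$SPDK_DIR/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  fabric_serial=$("$SPDK_DIR/build/bin/spdk_nvme_identify" \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' |
      grep 'Serial Number:' | awk '{print $3}')
  [ "$fabric_serial" = "$serial" ] || echo 'passthru identify mismatch' >&2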
00:21:24.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:24.121 14:26:05 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:24.121 14:26:05 -- nvmf/common.sh@7 -- # uname -s 00:21:24.121 14:26:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:24.121 14:26:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:24.121 14:26:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:24.121 14:26:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:24.121 14:26:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:24.121 14:26:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:24.121 14:26:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:24.121 14:26:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:24.121 14:26:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:24.121 14:26:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:24.121 14:26:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:24.121 14:26:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:21:24.121 14:26:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:24.121 14:26:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:24.121 14:26:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:24.121 14:26:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:24.121 14:26:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:24.121 14:26:05 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:24.121 14:26:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:24.121 14:26:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:24.121 14:26:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.121 14:26:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.121 14:26:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.121 14:26:05 -- paths/export.sh@5 -- # export PATH 00:21:24.121 14:26:05 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.121 14:26:05 -- nvmf/common.sh@47 -- # : 0 00:21:24.121 14:26:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:24.121 14:26:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:24.121 14:26:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:24.121 14:26:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:24.121 14:26:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:24.121 14:26:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:24.121 14:26:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:24.121 14:26:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:24.121 14:26:05 -- target/dif.sh@15 -- # NULL_META=16 00:21:24.121 14:26:05 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:21:24.121 14:26:05 -- target/dif.sh@15 -- # NULL_SIZE=64 00:21:24.121 14:26:05 -- target/dif.sh@15 -- # NULL_DIF=1 00:21:24.121 14:26:05 -- target/dif.sh@135 -- # nvmftestinit 00:21:24.121 14:26:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:24.121 14:26:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:24.121 14:26:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:24.121 14:26:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:24.121 14:26:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:24.121 14:26:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.121 14:26:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:24.121 14:26:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.121 14:26:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:24.121 14:26:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:24.121 14:26:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:24.121 14:26:05 -- common/autotest_common.sh@10 -- # set +x 00:21:25.498 14:26:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:25.498 14:26:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:25.757 14:26:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:25.757 14:26:07 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:25.757 14:26:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:25.757 14:26:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:25.757 14:26:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:25.757 14:26:07 -- nvmf/common.sh@295 -- # net_devs=() 00:21:25.757 14:26:07 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:25.757 14:26:07 -- nvmf/common.sh@296 -- # e810=() 00:21:25.757 14:26:07 -- nvmf/common.sh@296 -- # local -ga e810 00:21:25.757 14:26:07 -- nvmf/common.sh@297 -- # x722=() 00:21:25.757 14:26:07 -- nvmf/common.sh@297 -- # local -ga x722 00:21:25.757 14:26:07 -- nvmf/common.sh@298 -- # mlx=() 00:21:25.757 14:26:07 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:25.757 14:26:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:25.757 14:26:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:25.757 14:26:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:25.757 14:26:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
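dif.sh fixes its data-integrity geometry up front: NULL_META=16, NULL_BLOCK_SIZE=512, NULL_SIZE=64 and NULL_DIF=1 describe a 64 MB null bdev with 512-byte blocks, 16 bytes of per-block metadata and DIF type 1 protection, which is exactly the shape its create_subsystem helper requests further down in this run. As a one-line sketch against a running target (rpc.py standing in for the test's rpc_cmd wrapper):

  # 64 MB null bdev, 512 B blocks, 16 B metadata, DIF type 1 — the NULL_* values above.
  "$SPDK_DIR/scripts/rpc.py" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1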
00:21:25.757 14:26:07 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:25.757 14:26:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:25.757 14:26:07 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:25.757 14:26:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:25.757 14:26:07 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:25.757 14:26:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:25.757 14:26:07 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:25.757 14:26:07 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:25.757 14:26:07 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:25.757 14:26:07 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:25.757 14:26:07 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:25.757 14:26:07 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:25.757 14:26:07 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:25.757 14:26:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:25.757 14:26:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:21:25.757 Found 0000:08:00.0 (0x8086 - 0x159b) 00:21:25.757 14:26:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:25.757 14:26:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:25.757 14:26:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.757 14:26:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.757 14:26:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:25.757 14:26:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:25.758 14:26:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:21:25.758 Found 0000:08:00.1 (0x8086 - 0x159b) 00:21:25.758 14:26:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:25.758 14:26:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:25.758 14:26:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.758 14:26:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.758 14:26:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:25.758 14:26:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:25.758 14:26:07 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:25.758 14:26:07 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:25.758 14:26:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:25.758 14:26:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.758 14:26:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:25.758 14:26:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.758 14:26:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:21:25.758 Found net devices under 0000:08:00.0: cvl_0_0 00:21:25.758 14:26:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.758 14:26:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:25.758 14:26:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.758 14:26:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:25.758 14:26:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.758 14:26:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:21:25.758 Found net devices under 0000:08:00.1: cvl_0_1 00:21:25.758 14:26:07 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:25.758 14:26:07 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:25.758 14:26:07 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:25.758 14:26:07 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:25.758 14:26:07 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:25.758 14:26:07 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:25.758 14:26:07 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:25.758 14:26:07 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:25.758 14:26:07 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:25.758 14:26:07 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:25.758 14:26:07 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:25.758 14:26:07 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:25.758 14:26:07 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:25.758 14:26:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:25.758 14:26:07 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:25.758 14:26:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:25.758 14:26:07 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:25.758 14:26:07 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:25.758 14:26:07 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:25.758 14:26:07 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:25.758 14:26:07 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:25.758 14:26:07 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:25.758 14:26:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:25.758 14:26:07 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:25.758 14:26:07 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:25.758 14:26:07 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:25.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:25.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:21:25.758 00:21:25.758 --- 10.0.0.2 ping statistics --- 00:21:25.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.758 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:21:25.758 14:26:07 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:25.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:25.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:21:25.758 00:21:25.758 --- 10.0.0.1 ping statistics --- 00:21:25.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.758 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:21:25.758 14:26:07 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:25.758 14:26:07 -- nvmf/common.sh@411 -- # return 0 00:21:25.758 14:26:07 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:21:25.758 14:26:07 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:21:26.691 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:21:26.691 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:21:26.691 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:21:26.691 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:21:26.691 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:21:26.691 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:21:26.691 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:21:26.691 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:21:26.691 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:21:26.691 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:21:26.692 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:21:26.692 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:21:26.692 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:21:26.692 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:21:26.692 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:21:26.692 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:21:26.692 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:21:26.949 14:26:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:26.949 14:26:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:26.949 14:26:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:26.949 14:26:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:26.949 14:26:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:26.949 14:26:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:26.949 14:26:08 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:21:26.949 14:26:08 -- target/dif.sh@137 -- # nvmfappstart 00:21:26.949 14:26:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:26.949 14:26:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:26.949 14:26:08 -- common/autotest_common.sh@10 -- # set +x 00:21:26.949 14:26:08 -- nvmf/common.sh@470 -- # nvmfpid=3221002 00:21:26.949 14:26:08 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:26.949 14:26:08 -- nvmf/common.sh@471 -- # waitforlisten 3221002 00:21:26.949 14:26:08 -- common/autotest_common.sh@817 -- # '[' -z 3221002 ']' 00:21:26.949 14:26:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.949 14:26:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:26.949 14:26:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
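nvmf_tcp_init above wires the two e810 ports into a self-contained loopback rig: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1/24), port 4420 is opened in iptables, and both directions are ping-verified before nvmf_tgt starts inside the namespace. dif.sh also appends --dif-insert-or-strip so the TCP transport inserts protection information on writes and strips it on reads at the target side. The wiring reduces to:

  # Interface names and addresses taken from this run.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1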
00:21:26.949 14:26:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:26.949 14:26:08 -- common/autotest_common.sh@10 -- # set +x 00:21:26.949 [2024-04-26 14:26:08.345855] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:21:26.949 [2024-04-26 14:26:08.345944] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.949 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.949 [2024-04-26 14:26:08.410670] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.207 [2024-04-26 14:26:08.526877] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.207 [2024-04-26 14:26:08.526928] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:27.207 [2024-04-26 14:26:08.526953] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.207 [2024-04-26 14:26:08.526974] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.207 [2024-04-26 14:26:08.526993] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:27.207 [2024-04-26 14:26:08.527036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.207 14:26:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:27.207 14:26:08 -- common/autotest_common.sh@850 -- # return 0 00:21:27.207 14:26:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:27.207 14:26:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:27.207 14:26:08 -- common/autotest_common.sh@10 -- # set +x 00:21:27.207 14:26:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.207 14:26:08 -- target/dif.sh@139 -- # create_transport 00:21:27.207 14:26:08 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:21:27.207 14:26:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.207 14:26:08 -- common/autotest_common.sh@10 -- # set +x 00:21:27.207 [2024-04-26 14:26:08.667088] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.207 14:26:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.207 14:26:08 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:21:27.207 14:26:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:27.207 14:26:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:27.207 14:26:08 -- common/autotest_common.sh@10 -- # set +x 00:21:27.466 ************************************ 00:21:27.466 START TEST fio_dif_1_default 00:21:27.466 ************************************ 00:21:27.466 14:26:08 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:21:27.466 14:26:08 -- target/dif.sh@86 -- # create_subsystems 0 00:21:27.466 14:26:08 -- target/dif.sh@28 -- # local sub 00:21:27.466 14:26:08 -- target/dif.sh@30 -- # for sub in "$@" 00:21:27.466 14:26:08 -- target/dif.sh@31 -- # create_subsystem 0 00:21:27.466 14:26:08 -- target/dif.sh@18 -- # local sub_id=0 00:21:27.466 14:26:08 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:27.466 14:26:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.466 14:26:08 -- common/autotest_common.sh@10 -- # set +x 00:21:27.466 
bdev_null0 00:21:27.466 14:26:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.466 14:26:08 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:27.466 14:26:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.466 14:26:08 -- common/autotest_common.sh@10 -- # set +x 00:21:27.466 14:26:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.466 14:26:08 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:27.466 14:26:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.466 14:26:08 -- common/autotest_common.sh@10 -- # set +x 00:21:27.466 14:26:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.466 14:26:08 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:27.466 14:26:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.466 14:26:08 -- common/autotest_common.sh@10 -- # set +x 00:21:27.466 [2024-04-26 14:26:08.815586] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:27.466 14:26:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:27.466 14:26:08 -- target/dif.sh@87 -- # fio /dev/fd/62 00:21:27.466 14:26:08 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:21:27.466 14:26:08 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:27.466 14:26:08 -- nvmf/common.sh@521 -- # config=() 00:21:27.466 14:26:08 -- nvmf/common.sh@521 -- # local subsystem config 00:21:27.466 14:26:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:27.466 14:26:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:27.466 { 00:21:27.466 "params": { 00:21:27.466 "name": "Nvme$subsystem", 00:21:27.466 "trtype": "$TEST_TRANSPORT", 00:21:27.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.466 "adrfam": "ipv4", 00:21:27.466 "trsvcid": "$NVMF_PORT", 00:21:27.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.466 "hdgst": ${hdgst:-false}, 00:21:27.466 "ddgst": ${ddgst:-false} 00:21:27.466 }, 00:21:27.466 "method": "bdev_nvme_attach_controller" 00:21:27.466 } 00:21:27.466 EOF 00:21:27.466 )") 00:21:27.466 14:26:08 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:27.466 14:26:08 -- target/dif.sh@82 -- # gen_fio_conf 00:21:27.466 14:26:08 -- target/dif.sh@54 -- # local file 00:21:27.466 14:26:08 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:27.466 14:26:08 -- target/dif.sh@56 -- # cat 00:21:27.466 14:26:08 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:21:27.466 14:26:08 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:27.466 14:26:08 -- nvmf/common.sh@543 -- # cat 00:21:27.466 14:26:08 -- common/autotest_common.sh@1325 -- # local sanitizers 00:21:27.466 14:26:08 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:27.466 14:26:08 -- common/autotest_common.sh@1327 -- # shift 00:21:27.466 14:26:08 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:21:27.466 14:26:08 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:27.466 14:26:08 -- target/dif.sh@72 -- # (( file = 1 
)) 00:21:27.466 14:26:08 -- target/dif.sh@72 -- # (( file <= files )) 00:21:27.466 14:26:08 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:27.466 14:26:08 -- nvmf/common.sh@545 -- # jq . 00:21:27.466 14:26:08 -- common/autotest_common.sh@1331 -- # grep libasan 00:21:27.466 14:26:08 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:27.466 14:26:08 -- nvmf/common.sh@546 -- # IFS=, 00:21:27.466 14:26:08 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:27.466 "params": { 00:21:27.466 "name": "Nvme0", 00:21:27.466 "trtype": "tcp", 00:21:27.466 "traddr": "10.0.0.2", 00:21:27.466 "adrfam": "ipv4", 00:21:27.466 "trsvcid": "4420", 00:21:27.466 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:27.466 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:27.466 "hdgst": false, 00:21:27.466 "ddgst": false 00:21:27.466 }, 00:21:27.466 "method": "bdev_nvme_attach_controller" 00:21:27.466 }' 00:21:27.466 14:26:08 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:27.466 14:26:08 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:27.466 14:26:08 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:27.466 14:26:08 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:27.466 14:26:08 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:21:27.466 14:26:08 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:27.466 14:26:08 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:27.466 14:26:08 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:27.466 14:26:08 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:21:27.466 14:26:08 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:27.724 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:27.724 fio-3.35 00:21:27.724 Starting 1 thread 00:21:27.724 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.923 00:21:39.923 filename0: (groupid=0, jobs=1): err= 0: pid=3221214: Fri Apr 26 14:26:19 2024 00:21:39.923 read: IOPS=189, BW=758KiB/s (777kB/s)(7584KiB/10001msec) 00:21:39.923 slat (nsec): min=7352, max=73588, avg=9372.53, stdev=3551.04 00:21:39.923 clat (usec): min=612, max=45357, avg=21069.97, stdev=20256.94 00:21:39.923 lat (usec): min=620, max=45393, avg=21079.34, stdev=20257.21 00:21:39.923 clat percentiles (usec): 00:21:39.923 | 1.00th=[ 652], 5.00th=[ 685], 10.00th=[ 709], 20.00th=[ 734], 00:21:39.923 | 30.00th=[ 750], 40.00th=[ 758], 50.00th=[41157], 60.00th=[41157], 00:21:39.923 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:21:39.923 | 99.00th=[41157], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:21:39.923 | 99.99th=[45351] 00:21:39.923 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=759.58, stdev=25.78, samples=19 00:21:39.923 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:21:39.923 lat (usec) : 750=32.54%, 1000=17.25% 00:21:39.923 lat (msec) : 50=50.21% 00:21:39.923 cpu : usr=91.13%, sys=8.52%, ctx=27, majf=0, minf=250 00:21:39.923 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:39.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:39.923 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:21:39.923 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:39.923 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:39.923 00:21:39.923 Run status group 0 (all jobs): 00:21:39.923 READ: bw=758KiB/s (777kB/s), 758KiB/s-758KiB/s (777kB/s-777kB/s), io=7584KiB (7766kB), run=10001-10001msec 00:21:39.923 14:26:19 -- target/dif.sh@88 -- # destroy_subsystems 0 00:21:39.923 14:26:19 -- target/dif.sh@43 -- # local sub 00:21:39.923 14:26:19 -- target/dif.sh@45 -- # for sub in "$@" 00:21:39.923 14:26:19 -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:39.923 14:26:19 -- target/dif.sh@36 -- # local sub_id=0 00:21:39.923 14:26:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:39.923 14:26:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.923 14:26:19 -- common/autotest_common.sh@10 -- # set +x 00:21:39.923 14:26:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.923 14:26:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:39.923 14:26:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.923 14:26:19 -- common/autotest_common.sh@10 -- # set +x 00:21:39.923 14:26:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.923 00:21:39.923 real 0m11.196s 00:21:39.923 user 0m10.168s 00:21:39.923 sys 0m1.084s 00:21:39.924 14:26:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:39.924 14:26:19 -- common/autotest_common.sh@10 -- # set +x 00:21:39.924 ************************************ 00:21:39.924 END TEST fio_dif_1_default 00:21:39.924 ************************************ 00:21:39.924 14:26:20 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:21:39.924 14:26:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:39.924 14:26:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:39.924 14:26:20 -- common/autotest_common.sh@10 -- # set +x 00:21:39.924 ************************************ 00:21:39.924 START TEST fio_dif_1_multi_subsystems 00:21:39.924 ************************************ 00:21:39.924 14:26:20 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:21:39.924 14:26:20 -- target/dif.sh@92 -- # local files=1 00:21:39.924 14:26:20 -- target/dif.sh@94 -- # create_subsystems 0 1 00:21:39.924 14:26:20 -- target/dif.sh@28 -- # local sub 00:21:39.924 14:26:20 -- target/dif.sh@30 -- # for sub in "$@" 00:21:39.924 14:26:20 -- target/dif.sh@31 -- # create_subsystem 0 00:21:39.924 14:26:20 -- target/dif.sh@18 -- # local sub_id=0 00:21:39.924 14:26:20 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:39.924 14:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.924 14:26:20 -- common/autotest_common.sh@10 -- # set +x 00:21:39.924 bdev_null0 00:21:39.924 14:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.924 14:26:20 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:39.924 14:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.924 14:26:20 -- common/autotest_common.sh@10 -- # set +x 00:21:39.924 14:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.924 14:26:20 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:39.924 14:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.924 14:26:20 -- common/autotest_common.sh@10 -- # set +x 
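
The create_subsystems/create_transport steps traced above boil down to a short sequence of SPDK RPCs. A minimal sketch of the same DIF-enabled target setup against an already-running nvmf_tgt, assuming the stock scripts/rpc.py client; every option string is copied verbatim from the trace, including the 10.0.0.2:4420 listener used in this run:

# TCP transport with DIF insert/strip enabled (same flags as create_transport above)
scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
# 64 MiB null bdev, 512-byte blocks, 16 bytes of metadata, protection type 1
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
# expose the bdev as a namespace over NVMe/TCP
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The rpc_cmd wrapper in the trace forwards to these same RPC methods, so the arguments carry over unchanged.
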
00:21:39.924 14:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.924 14:26:20 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:39.924 14:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.924 14:26:20 -- common/autotest_common.sh@10 -- # set +x 00:21:39.924 [2024-04-26 14:26:20.153229] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.924 14:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.924 14:26:20 -- target/dif.sh@30 -- # for sub in "$@" 00:21:39.924 14:26:20 -- target/dif.sh@31 -- # create_subsystem 1 00:21:39.924 14:26:20 -- target/dif.sh@18 -- # local sub_id=1 00:21:39.924 14:26:20 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:21:39.924 14:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.924 14:26:20 -- common/autotest_common.sh@10 -- # set +x 00:21:39.924 bdev_null1 00:21:39.924 14:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.924 14:26:20 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:39.924 14:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.924 14:26:20 -- common/autotest_common.sh@10 -- # set +x 00:21:39.924 14:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.924 14:26:20 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:39.924 14:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.924 14:26:20 -- common/autotest_common.sh@10 -- # set +x 00:21:39.924 14:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.924 14:26:20 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:39.924 14:26:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.924 14:26:20 -- common/autotest_common.sh@10 -- # set +x 00:21:39.924 14:26:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.924 14:26:20 -- target/dif.sh@95 -- # fio /dev/fd/62 00:21:39.924 14:26:20 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:21:39.924 14:26:20 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:39.924 14:26:20 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:39.924 14:26:20 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:39.924 14:26:20 -- nvmf/common.sh@521 -- # config=() 00:21:39.924 14:26:20 -- target/dif.sh@82 -- # gen_fio_conf 00:21:39.924 14:26:20 -- nvmf/common.sh@521 -- # local subsystem config 00:21:39.924 14:26:20 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:21:39.924 14:26:20 -- target/dif.sh@54 -- # local file 00:21:39.924 14:26:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:39.924 14:26:20 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:39.924 14:26:20 -- common/autotest_common.sh@1325 -- # local sanitizers 00:21:39.924 14:26:20 -- target/dif.sh@56 -- # cat 00:21:39.924 14:26:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:39.924 { 00:21:39.924 "params": { 00:21:39.924 "name": "Nvme$subsystem", 00:21:39.924 "trtype": "$TEST_TRANSPORT", 00:21:39.924 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:21:39.924 "adrfam": "ipv4", 00:21:39.924 "trsvcid": "$NVMF_PORT", 00:21:39.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.924 "hdgst": ${hdgst:-false}, 00:21:39.924 "ddgst": ${ddgst:-false} 00:21:39.924 }, 00:21:39.924 "method": "bdev_nvme_attach_controller" 00:21:39.924 } 00:21:39.924 EOF 00:21:39.924 )") 00:21:39.924 14:26:20 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:39.924 14:26:20 -- common/autotest_common.sh@1327 -- # shift 00:21:39.924 14:26:20 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:21:39.924 14:26:20 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:39.924 14:26:20 -- nvmf/common.sh@543 -- # cat 00:21:39.924 14:26:20 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:39.924 14:26:20 -- target/dif.sh@72 -- # (( file = 1 )) 00:21:39.924 14:26:20 -- common/autotest_common.sh@1331 -- # grep libasan 00:21:39.924 14:26:20 -- target/dif.sh@72 -- # (( file <= files )) 00:21:39.924 14:26:20 -- target/dif.sh@73 -- # cat 00:21:39.924 14:26:20 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:39.924 14:26:20 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:39.924 14:26:20 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:39.924 { 00:21:39.924 "params": { 00:21:39.924 "name": "Nvme$subsystem", 00:21:39.924 "trtype": "$TEST_TRANSPORT", 00:21:39.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.924 "adrfam": "ipv4", 00:21:39.924 "trsvcid": "$NVMF_PORT", 00:21:39.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.924 "hdgst": ${hdgst:-false}, 00:21:39.924 "ddgst": ${ddgst:-false} 00:21:39.924 }, 00:21:39.924 "method": "bdev_nvme_attach_controller" 00:21:39.924 } 00:21:39.924 EOF 00:21:39.924 )") 00:21:39.924 14:26:20 -- target/dif.sh@72 -- # (( file++ )) 00:21:39.924 14:26:20 -- target/dif.sh@72 -- # (( file <= files )) 00:21:39.924 14:26:20 -- nvmf/common.sh@543 -- # cat 00:21:39.924 14:26:20 -- nvmf/common.sh@545 -- # jq . 
00:21:39.924 14:26:20 -- nvmf/common.sh@546 -- # IFS=, 00:21:39.924 14:26:20 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:39.924 "params": { 00:21:39.924 "name": "Nvme0", 00:21:39.924 "trtype": "tcp", 00:21:39.924 "traddr": "10.0.0.2", 00:21:39.924 "adrfam": "ipv4", 00:21:39.924 "trsvcid": "4420", 00:21:39.924 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:39.924 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:39.924 "hdgst": false, 00:21:39.924 "ddgst": false 00:21:39.924 }, 00:21:39.924 "method": "bdev_nvme_attach_controller" 00:21:39.924 },{ 00:21:39.924 "params": { 00:21:39.924 "name": "Nvme1", 00:21:39.924 "trtype": "tcp", 00:21:39.924 "traddr": "10.0.0.2", 00:21:39.924 "adrfam": "ipv4", 00:21:39.924 "trsvcid": "4420", 00:21:39.924 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.924 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:39.924 "hdgst": false, 00:21:39.924 "ddgst": false 00:21:39.924 }, 00:21:39.924 "method": "bdev_nvme_attach_controller" 00:21:39.924 }' 00:21:39.924 14:26:20 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:39.924 14:26:20 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:39.924 14:26:20 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:39.924 14:26:20 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:39.924 14:26:20 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:21:39.924 14:26:20 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:39.924 14:26:20 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:39.924 14:26:20 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:39.924 14:26:20 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:21:39.924 14:26:20 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:39.924 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:39.924 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:39.924 fio-3.35 00:21:39.924 Starting 2 threads 00:21:39.924 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.893 00:21:49.893 filename0: (groupid=0, jobs=1): err= 0: pid=3222300: Fri Apr 26 14:26:31 2024 00:21:49.893 read: IOPS=96, BW=386KiB/s (396kB/s)(3872KiB/10025msec) 00:21:49.893 slat (nsec): min=7562, max=42644, avg=9197.78, stdev=2215.61 00:21:49.893 clat (usec): min=40815, max=42618, avg=41396.48, stdev=504.58 00:21:49.893 lat (usec): min=40824, max=42660, avg=41405.67, stdev=504.75 00:21:49.893 clat percentiles (usec): 00:21:49.893 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:21:49.893 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:21:49.893 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:21:49.893 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:21:49.893 | 99.99th=[42730] 00:21:49.893 bw ( KiB/s): min= 384, max= 416, per=33.79%, avg=385.60, stdev= 7.16, samples=20 00:21:49.893 iops : min= 96, max= 104, avg=96.40, stdev= 1.79, samples=20 00:21:49.893 lat (msec) : 50=100.00% 00:21:49.893 cpu : usr=94.05%, sys=5.56%, ctx=11, majf=0, minf=42 00:21:49.893 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:49.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:21:49.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:49.893 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:49.893 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:49.893 filename1: (groupid=0, jobs=1): err= 0: pid=3222301: Fri Apr 26 14:26:31 2024 00:21:49.893 read: IOPS=188, BW=755KiB/s (773kB/s)(7552KiB/10003msec) 00:21:49.893 slat (nsec): min=7529, max=54587, avg=9323.23, stdev=2957.28 00:21:49.893 clat (usec): min=611, max=42733, avg=21164.08, stdev=20204.82 00:21:49.893 lat (usec): min=619, max=42743, avg=21173.40, stdev=20204.98 00:21:49.893 clat percentiles (usec): 00:21:49.893 | 1.00th=[ 676], 5.00th=[ 717], 10.00th=[ 734], 20.00th=[ 758], 00:21:49.893 | 30.00th=[ 783], 40.00th=[ 873], 50.00th=[41157], 60.00th=[41157], 00:21:49.893 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:21:49.893 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:21:49.893 | 99.99th=[42730] 00:21:49.893 bw ( KiB/s): min= 672, max= 768, per=66.08%, avg=753.60, stdev=30.22, samples=20 00:21:49.893 iops : min= 168, max= 192, avg=188.40, stdev= 7.56, samples=20 00:21:49.893 lat (usec) : 750=16.74%, 1000=32.20% 00:21:49.893 lat (msec) : 2=0.64%, 50=50.42% 00:21:49.893 cpu : usr=94.02%, sys=5.60%, ctx=15, majf=0, minf=189 00:21:49.893 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:49.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:49.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:49.893 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:49.894 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:49.894 00:21:49.894 Run status group 0 (all jobs): 00:21:49.894 READ: bw=1140KiB/s (1167kB/s), 386KiB/s-755KiB/s (396kB/s-773kB/s), io=11.2MiB (11.7MB), run=10003-10025msec 00:21:50.153 14:26:31 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:21:50.153 14:26:31 -- target/dif.sh@43 -- # local sub 00:21:50.153 14:26:31 -- target/dif.sh@45 -- # for sub in "$@" 00:21:50.153 14:26:31 -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:50.153 14:26:31 -- target/dif.sh@36 -- # local sub_id=0 00:21:50.153 14:26:31 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:50.153 14:26:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.153 14:26:31 -- common/autotest_common.sh@10 -- # set +x 00:21:50.153 14:26:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.153 14:26:31 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:50.153 14:26:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.153 14:26:31 -- common/autotest_common.sh@10 -- # set +x 00:21:50.153 14:26:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.153 14:26:31 -- target/dif.sh@45 -- # for sub in "$@" 00:21:50.153 14:26:31 -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:50.153 14:26:31 -- target/dif.sh@36 -- # local sub_id=1 00:21:50.153 14:26:31 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:50.153 14:26:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.153 14:26:31 -- common/autotest_common.sh@10 -- # set +x 00:21:50.153 14:26:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.153 14:26:31 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:50.153 14:26:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.153 
14:26:31 -- common/autotest_common.sh@10 -- # set +x 00:21:50.153 14:26:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.153 00:21:50.153 real 0m11.402s 00:21:50.153 user 0m20.029s 00:21:50.153 sys 0m1.401s 00:21:50.153 14:26:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:50.153 14:26:31 -- common/autotest_common.sh@10 -- # set +x 00:21:50.153 ************************************ 00:21:50.153 END TEST fio_dif_1_multi_subsystems 00:21:50.153 ************************************ 00:21:50.153 14:26:31 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:21:50.153 14:26:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:50.153 14:26:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:50.153 14:26:31 -- common/autotest_common.sh@10 -- # set +x 00:21:50.153 ************************************ 00:21:50.153 START TEST fio_dif_rand_params 00:21:50.153 ************************************ 00:21:50.153 14:26:31 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:21:50.153 14:26:31 -- target/dif.sh@100 -- # local NULL_DIF 00:21:50.153 14:26:31 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:21:50.153 14:26:31 -- target/dif.sh@103 -- # NULL_DIF=3 00:21:50.153 14:26:31 -- target/dif.sh@103 -- # bs=128k 00:21:50.153 14:26:31 -- target/dif.sh@103 -- # numjobs=3 00:21:50.153 14:26:31 -- target/dif.sh@103 -- # iodepth=3 00:21:50.153 14:26:31 -- target/dif.sh@103 -- # runtime=5 00:21:50.153 14:26:31 -- target/dif.sh@105 -- # create_subsystems 0 00:21:50.153 14:26:31 -- target/dif.sh@28 -- # local sub 00:21:50.153 14:26:31 -- target/dif.sh@30 -- # for sub in "$@" 00:21:50.153 14:26:31 -- target/dif.sh@31 -- # create_subsystem 0 00:21:50.153 14:26:31 -- target/dif.sh@18 -- # local sub_id=0 00:21:50.153 14:26:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:50.153 14:26:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.153 14:26:31 -- common/autotest_common.sh@10 -- # set +x 00:21:50.153 bdev_null0 00:21:50.153 14:26:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.153 14:26:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:50.153 14:26:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.153 14:26:31 -- common/autotest_common.sh@10 -- # set +x 00:21:50.153 14:26:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.153 14:26:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:50.153 14:26:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.153 14:26:31 -- common/autotest_common.sh@10 -- # set +x 00:21:50.153 14:26:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.153 14:26:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:50.153 14:26:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.153 14:26:31 -- common/autotest_common.sh@10 -- # set +x 00:21:50.153 [2024-04-26 14:26:31.697697] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.153 14:26:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.153 14:26:31 -- target/dif.sh@106 -- # fio /dev/fd/62 00:21:50.153 14:26:31 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:21:50.153 14:26:31 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 
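
Relative to the two subtests above, fio_dif_rand_params recreates the null bdev with --dif-type 3 and moves fio to 128 KiB random reads at shallow queue depth (the bs=128k/numjobs=3/iodepth=3/runtime=5 parameters in the trace). Roughly speaking, type 3 protection still carries guard and application tags in the 16-byte metadata, but unlike type 1 its reference tag is not tied to the LBA. For reference, the equivalent standalone RPC, arguments copied from the trace:

# same null bdev as before, but T10 DIF type 3
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
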
00:21:50.153 14:26:31 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:50.153 14:26:31 -- nvmf/common.sh@521 -- # config=() 00:21:50.153 14:26:31 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:50.153 14:26:31 -- target/dif.sh@82 -- # gen_fio_conf 00:21:50.153 14:26:31 -- nvmf/common.sh@521 -- # local subsystem config 00:21:50.153 14:26:31 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:21:50.153 14:26:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:50.153 14:26:31 -- target/dif.sh@54 -- # local file 00:21:50.153 14:26:31 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:50.153 14:26:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:50.153 { 00:21:50.153 "params": { 00:21:50.153 "name": "Nvme$subsystem", 00:21:50.153 "trtype": "$TEST_TRANSPORT", 00:21:50.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.153 "adrfam": "ipv4", 00:21:50.153 "trsvcid": "$NVMF_PORT", 00:21:50.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.153 "hdgst": ${hdgst:-false}, 00:21:50.153 "ddgst": ${ddgst:-false} 00:21:50.153 }, 00:21:50.153 "method": "bdev_nvme_attach_controller" 00:21:50.153 } 00:21:50.153 EOF 00:21:50.153 )") 00:21:50.153 14:26:31 -- target/dif.sh@56 -- # cat 00:21:50.153 14:26:31 -- common/autotest_common.sh@1325 -- # local sanitizers 00:21:50.153 14:26:31 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:50.153 14:26:31 -- common/autotest_common.sh@1327 -- # shift 00:21:50.153 14:26:31 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:21:50.153 14:26:31 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:50.153 14:26:31 -- nvmf/common.sh@543 -- # cat 00:21:50.153 14:26:31 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:50.153 14:26:31 -- target/dif.sh@72 -- # (( file = 1 )) 00:21:50.153 14:26:31 -- target/dif.sh@72 -- # (( file <= files )) 00:21:50.153 14:26:31 -- common/autotest_common.sh@1331 -- # grep libasan 00:21:50.153 14:26:31 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:50.153 14:26:31 -- nvmf/common.sh@545 -- # jq . 
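
The ldd/grep/awk lines traced here are the fio wrapper's sanitizer handling: if the SPDK fio plugin was linked against an ASAN runtime, that runtime must appear in LD_PRELOAD ahead of the plugin itself. A minimal sketch of the detection, paths taken from this run's workspace:

# find an ASAN runtime linked into the plugin, if any, and preload it first
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61

In this run no sanitizer library is found (asan_lib stays empty in the trace below), so LD_PRELOAD ends up containing only the plugin path, with a leading space.
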
00:21:50.153 14:26:31 -- nvmf/common.sh@546 -- # IFS=, 00:21:50.153 14:26:31 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:50.153 "params": { 00:21:50.154 "name": "Nvme0", 00:21:50.154 "trtype": "tcp", 00:21:50.154 "traddr": "10.0.0.2", 00:21:50.154 "adrfam": "ipv4", 00:21:50.154 "trsvcid": "4420", 00:21:50.154 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:50.154 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:50.154 "hdgst": false, 00:21:50.154 "ddgst": false 00:21:50.154 }, 00:21:50.154 "method": "bdev_nvme_attach_controller" 00:21:50.154 }' 00:21:50.412 14:26:31 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:50.412 14:26:31 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:50.412 14:26:31 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:50.412 14:26:31 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:50.412 14:26:31 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:21:50.412 14:26:31 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:50.412 14:26:31 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:50.412 14:26:31 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:50.412 14:26:31 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:21:50.412 14:26:31 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:50.412 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:50.412 ... 00:21:50.412 fio-3.35 00:21:50.412 Starting 3 threads 00:21:50.670 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.226 00:21:57.226 filename0: (groupid=0, jobs=1): err= 0: pid=3223456: Fri Apr 26 14:26:37 2024 00:21:57.226 read: IOPS=184, BW=23.0MiB/s (24.1MB/s)(116MiB/5032msec) 00:21:57.226 slat (nsec): min=7769, max=29476, avg=12593.06, stdev=2708.76 00:21:57.226 clat (usec): min=4899, max=93819, avg=16263.88, stdev=13448.42 00:21:57.226 lat (usec): min=4910, max=93830, avg=16276.47, stdev=13448.30 00:21:57.226 clat percentiles (usec): 00:21:57.226 | 1.00th=[ 5604], 5.00th=[ 6521], 10.00th=[ 8225], 20.00th=[ 9241], 00:21:57.226 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[11731], 60.00th=[13435], 00:21:57.226 | 70.00th=[14353], 80.00th=[15401], 90.00th=[46924], 95.00th=[51119], 00:21:57.226 | 99.00th=[54789], 99.50th=[55313], 99.90th=[93848], 99.95th=[93848], 00:21:57.226 | 99.99th=[93848] 00:21:57.226 bw ( KiB/s): min=18176, max=29952, per=34.26%, avg=23659.30, stdev=3980.42, samples=10 00:21:57.226 iops : min= 142, max= 234, avg=184.80, stdev=31.09, samples=10 00:21:57.226 lat (msec) : 10=35.06%, 20=51.56%, 50=7.23%, 100=6.15% 00:21:57.226 cpu : usr=93.14%, sys=6.44%, ctx=10, majf=0, minf=121 00:21:57.226 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:57.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.226 issued rwts: total=927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.226 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:57.226 filename0: (groupid=0, jobs=1): err= 0: pid=3223457: Fri Apr 26 14:26:37 2024 00:21:57.226 read: IOPS=181, BW=22.7MiB/s (23.8MB/s)(114MiB/5005msec) 00:21:57.226 slat (nsec): min=6787, max=40130, avg=14991.58, stdev=5297.56 00:21:57.226 clat 
(usec): min=5056, max=87169, avg=16511.03, stdev=13819.92 00:21:57.226 lat (usec): min=5069, max=87183, avg=16526.02, stdev=13819.54 00:21:57.226 clat percentiles (usec): 00:21:57.226 | 1.00th=[ 5800], 5.00th=[ 6652], 10.00th=[ 8455], 20.00th=[ 9634], 00:21:57.226 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11994], 60.00th=[12911], 00:21:57.226 | 70.00th=[13829], 80.00th=[14615], 90.00th=[47973], 95.00th=[52167], 00:21:57.226 | 99.00th=[55313], 99.50th=[55837], 99.90th=[87557], 99.95th=[87557], 00:21:57.226 | 99.99th=[87557] 00:21:57.226 bw ( KiB/s): min=14848, max=29696, per=33.59%, avg=23193.60, stdev=5012.49, samples=10 00:21:57.226 iops : min= 116, max= 232, avg=181.20, stdev=39.16, samples=10 00:21:57.226 lat (msec) : 10=29.85%, 20=57.05%, 50=5.84%, 100=7.27% 00:21:57.226 cpu : usr=92.51%, sys=6.91%, ctx=63, majf=0, minf=92 00:21:57.226 IO depths : 1=2.2%, 2=97.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:57.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.226 issued rwts: total=908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.226 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:57.226 filename0: (groupid=0, jobs=1): err= 0: pid=3223458: Fri Apr 26 14:26:37 2024 00:21:57.226 read: IOPS=175, BW=21.9MiB/s (23.0MB/s)(111MiB/5040msec) 00:21:57.226 slat (nsec): min=7769, max=28345, avg=12095.30, stdev=2559.93 00:21:57.226 clat (usec): min=5013, max=92980, avg=17084.97, stdev=14619.67 00:21:57.226 lat (usec): min=5025, max=92992, avg=17097.07, stdev=14619.83 00:21:57.226 clat percentiles (usec): 00:21:57.226 | 1.00th=[ 5669], 5.00th=[ 6194], 10.00th=[ 6980], 20.00th=[ 9372], 00:21:57.226 | 30.00th=[10028], 40.00th=[10814], 50.00th=[12125], 60.00th=[13698], 00:21:57.226 | 70.00th=[14615], 80.00th=[16188], 90.00th=[49021], 95.00th=[53216], 00:21:57.226 | 99.00th=[56361], 99.50th=[58983], 99.90th=[92799], 99.95th=[92799], 00:21:57.226 | 99.99th=[92799] 00:21:57.226 bw ( KiB/s): min=16128, max=31488, per=32.66%, avg=22556.90, stdev=4902.93, samples=10 00:21:57.226 iops : min= 126, max= 246, avg=176.20, stdev=38.34, samples=10 00:21:57.226 lat (msec) : 10=29.64%, 20=56.79%, 50=4.86%, 100=8.71% 00:21:57.226 cpu : usr=93.33%, sys=6.25%, ctx=10, majf=0, minf=76 00:21:57.226 IO depths : 1=3.2%, 2=96.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:57.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.226 issued rwts: total=884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.226 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:57.226 00:21:57.226 Run status group 0 (all jobs): 00:21:57.226 READ: bw=67.4MiB/s (70.7MB/s), 21.9MiB/s-23.0MiB/s (23.0MB/s-24.1MB/s), io=340MiB (356MB), run=5005-5040msec 00:21:57.226 14:26:37 -- target/dif.sh@107 -- # destroy_subsystems 0 00:21:57.226 14:26:37 -- target/dif.sh@43 -- # local sub 00:21:57.226 14:26:37 -- target/dif.sh@45 -- # for sub in "$@" 00:21:57.226 14:26:37 -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:57.226 14:26:37 -- target/dif.sh@36 -- # local sub_id=0 00:21:57.226 14:26:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:57.226 14:26:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.226 14:26:37 -- common/autotest_common.sh@10 -- # set +x 00:21:57.226 14:26:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
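
A quick consistency check on the three summaries above: fio's IOPS and BW columns are related through the block size, so at bs=128k the 23.0 MiB/s of the first job implies exactly its printed IOPS figure. In shell arithmetic, numbers taken from the pid=3223456 line:

# 23.0 MiB/s at 128 KiB per IO -> IOPS
echo $(( 23 * 1024 / 128 ))   # prints 184, matching "read: IOPS=184" above
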
00:21:57.226 14:26:37 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:57.226 14:26:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.226 14:26:37 -- common/autotest_common.sh@10 -- # set +x 00:21:57.226 14:26:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.226 14:26:37 -- target/dif.sh@109 -- # NULL_DIF=2 00:21:57.226 14:26:37 -- target/dif.sh@109 -- # bs=4k 00:21:57.226 14:26:37 -- target/dif.sh@109 -- # numjobs=8 00:21:57.226 14:26:37 -- target/dif.sh@109 -- # iodepth=16 00:21:57.226 14:26:37 -- target/dif.sh@109 -- # runtime= 00:21:57.226 14:26:37 -- target/dif.sh@109 -- # files=2 00:21:57.226 14:26:37 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:21:57.226 14:26:37 -- target/dif.sh@28 -- # local sub 00:21:57.226 14:26:37 -- target/dif.sh@30 -- # for sub in "$@" 00:21:57.226 14:26:37 -- target/dif.sh@31 -- # create_subsystem 0 00:21:57.227 14:26:37 -- target/dif.sh@18 -- # local sub_id=0 00:21:57.227 14:26:37 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:21:57.227 14:26:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.227 14:26:37 -- common/autotest_common.sh@10 -- # set +x 00:21:57.227 bdev_null0 00:21:57.227 14:26:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.227 14:26:37 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:57.227 14:26:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.227 14:26:37 -- common/autotest_common.sh@10 -- # set +x 00:21:57.227 14:26:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.227 14:26:37 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:57.227 14:26:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.227 14:26:37 -- common/autotest_common.sh@10 -- # set +x 00:21:57.227 14:26:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.227 14:26:37 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:57.227 14:26:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.227 14:26:37 -- common/autotest_common.sh@10 -- # set +x 00:21:57.227 [2024-04-26 14:26:37.768430] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.227 14:26:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.227 14:26:37 -- target/dif.sh@30 -- # for sub in "$@" 00:21:57.227 14:26:37 -- target/dif.sh@31 -- # create_subsystem 1 00:21:57.227 14:26:37 -- target/dif.sh@18 -- # local sub_id=1 00:21:57.227 14:26:37 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:21:57.227 14:26:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.227 14:26:37 -- common/autotest_common.sh@10 -- # set +x 00:21:57.227 bdev_null1 00:21:57.227 14:26:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.227 14:26:37 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:57.227 14:26:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.227 14:26:37 -- common/autotest_common.sh@10 -- # set +x 00:21:57.227 14:26:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.227 14:26:37 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:57.227 14:26:37 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.227 14:26:37 -- common/autotest_common.sh@10 -- # set +x 00:21:57.227 14:26:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.227 14:26:37 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:57.227 14:26:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.227 14:26:37 -- common/autotest_common.sh@10 -- # set +x 00:21:57.227 14:26:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.227 14:26:37 -- target/dif.sh@30 -- # for sub in "$@" 00:21:57.227 14:26:37 -- target/dif.sh@31 -- # create_subsystem 2 00:21:57.227 14:26:37 -- target/dif.sh@18 -- # local sub_id=2 00:21:57.227 14:26:37 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:21:57.227 14:26:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.227 14:26:37 -- common/autotest_common.sh@10 -- # set +x 00:21:57.227 bdev_null2 00:21:57.227 14:26:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.227 14:26:37 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:21:57.227 14:26:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.227 14:26:37 -- common/autotest_common.sh@10 -- # set +x 00:21:57.227 14:26:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.227 14:26:37 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:21:57.227 14:26:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.227 14:26:37 -- common/autotest_common.sh@10 -- # set +x 00:21:57.227 14:26:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.227 14:26:37 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:57.227 14:26:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:57.227 14:26:37 -- common/autotest_common.sh@10 -- # set +x 00:21:57.227 14:26:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:57.227 14:26:37 -- target/dif.sh@112 -- # fio /dev/fd/62 00:21:57.227 14:26:37 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:21:57.227 14:26:37 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:57.227 14:26:37 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:21:57.227 14:26:37 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:57.227 14:26:37 -- target/dif.sh@82 -- # gen_fio_conf 00:21:57.227 14:26:37 -- nvmf/common.sh@521 -- # config=() 00:21:57.227 14:26:37 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:21:57.227 14:26:37 -- nvmf/common.sh@521 -- # local subsystem config 00:21:57.227 14:26:37 -- target/dif.sh@54 -- # local file 00:21:57.227 14:26:37 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:57.227 14:26:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:57.227 14:26:37 -- target/dif.sh@56 -- # cat 00:21:57.227 14:26:37 -- common/autotest_common.sh@1325 -- # local sanitizers 00:21:57.227 14:26:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:57.227 { 00:21:57.227 "params": { 00:21:57.227 "name": "Nvme$subsystem", 00:21:57.227 "trtype": "$TEST_TRANSPORT", 00:21:57.227 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:21:57.227 "adrfam": "ipv4", 00:21:57.227 "trsvcid": "$NVMF_PORT", 00:21:57.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.227 "hdgst": ${hdgst:-false}, 00:21:57.227 "ddgst": ${ddgst:-false} 00:21:57.227 }, 00:21:57.227 "method": "bdev_nvme_attach_controller" 00:21:57.227 } 00:21:57.227 EOF 00:21:57.227 )") 00:21:57.227 14:26:37 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:57.227 14:26:37 -- common/autotest_common.sh@1327 -- # shift 00:21:57.227 14:26:37 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:21:57.227 14:26:37 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:57.227 14:26:37 -- nvmf/common.sh@543 -- # cat 00:21:57.227 14:26:37 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:57.227 14:26:37 -- target/dif.sh@72 -- # (( file = 1 )) 00:21:57.227 14:26:37 -- common/autotest_common.sh@1331 -- # grep libasan 00:21:57.227 14:26:37 -- target/dif.sh@72 -- # (( file <= files )) 00:21:57.227 14:26:37 -- target/dif.sh@73 -- # cat 00:21:57.227 14:26:37 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:57.227 14:26:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:57.227 14:26:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:57.227 { 00:21:57.227 "params": { 00:21:57.227 "name": "Nvme$subsystem", 00:21:57.227 "trtype": "$TEST_TRANSPORT", 00:21:57.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.227 "adrfam": "ipv4", 00:21:57.227 "trsvcid": "$NVMF_PORT", 00:21:57.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.227 "hdgst": ${hdgst:-false}, 00:21:57.227 "ddgst": ${ddgst:-false} 00:21:57.227 }, 00:21:57.227 "method": "bdev_nvme_attach_controller" 00:21:57.227 } 00:21:57.227 EOF 00:21:57.227 )") 00:21:57.227 14:26:37 -- target/dif.sh@72 -- # (( file++ )) 00:21:57.227 14:26:37 -- target/dif.sh@72 -- # (( file <= files )) 00:21:57.227 14:26:37 -- target/dif.sh@73 -- # cat 00:21:57.227 14:26:37 -- nvmf/common.sh@543 -- # cat 00:21:57.227 14:26:37 -- target/dif.sh@72 -- # (( file++ )) 00:21:57.227 14:26:37 -- target/dif.sh@72 -- # (( file <= files )) 00:21:57.227 14:26:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:57.227 14:26:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:57.227 { 00:21:57.227 "params": { 00:21:57.227 "name": "Nvme$subsystem", 00:21:57.227 "trtype": "$TEST_TRANSPORT", 00:21:57.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.227 "adrfam": "ipv4", 00:21:57.227 "trsvcid": "$NVMF_PORT", 00:21:57.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.227 "hdgst": ${hdgst:-false}, 00:21:57.227 "ddgst": ${ddgst:-false} 00:21:57.227 }, 00:21:57.227 "method": "bdev_nvme_attach_controller" 00:21:57.227 } 00:21:57.227 EOF 00:21:57.227 )") 00:21:57.227 14:26:37 -- nvmf/common.sh@543 -- # cat 00:21:57.227 14:26:37 -- nvmf/common.sh@545 -- # jq . 
00:21:57.227 14:26:37 -- nvmf/common.sh@546 -- # IFS=, 00:21:57.227 14:26:37 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:57.227 "params": { 00:21:57.227 "name": "Nvme0", 00:21:57.227 "trtype": "tcp", 00:21:57.227 "traddr": "10.0.0.2", 00:21:57.227 "adrfam": "ipv4", 00:21:57.227 "trsvcid": "4420", 00:21:57.227 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:57.227 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:57.227 "hdgst": false, 00:21:57.227 "ddgst": false 00:21:57.227 }, 00:21:57.227 "method": "bdev_nvme_attach_controller" 00:21:57.227 },{ 00:21:57.227 "params": { 00:21:57.227 "name": "Nvme1", 00:21:57.227 "trtype": "tcp", 00:21:57.227 "traddr": "10.0.0.2", 00:21:57.227 "adrfam": "ipv4", 00:21:57.227 "trsvcid": "4420", 00:21:57.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.227 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:57.227 "hdgst": false, 00:21:57.227 "ddgst": false 00:21:57.227 }, 00:21:57.227 "method": "bdev_nvme_attach_controller" 00:21:57.227 },{ 00:21:57.227 "params": { 00:21:57.227 "name": "Nvme2", 00:21:57.227 "trtype": "tcp", 00:21:57.227 "traddr": "10.0.0.2", 00:21:57.227 "adrfam": "ipv4", 00:21:57.227 "trsvcid": "4420", 00:21:57.227 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:57.227 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:57.227 "hdgst": false, 00:21:57.227 "ddgst": false 00:21:57.227 }, 00:21:57.227 "method": "bdev_nvme_attach_controller" 00:21:57.227 }' 00:21:57.227 14:26:37 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:57.227 14:26:37 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:57.228 14:26:37 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:57.228 14:26:37 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:21:57.228 14:26:37 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:21:57.228 14:26:37 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:57.228 14:26:37 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:57.228 14:26:37 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:57.228 14:26:37 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:21:57.228 14:26:37 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:57.228 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:57.228 ... 00:21:57.228 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:57.228 ... 00:21:57.228 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:57.228 ... 
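
The job banners just above describe what gen_fio_conf fed to fio over /dev/fd/61: 4 KiB random reads at queue depth 16 against three null-bdev-backed namespaces, eight jobs per file (hence the 24 threads starting below). The job file itself is never echoed into the log; a plausible standalone reconstruction, with the Nvme0n1/Nvme1n1/Nvme2n1 bdev names and the thread=1 setting assumed from SPDK's usual fio-plugin conventions, and bdev.json standing in for the three-controller attach config printed above:

cat > dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4096
iodepth=16
numjobs=8
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
[filename2]
filename=Nvme2n1
EOF
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio
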
00:21:57.228 fio-3.35 00:21:57.228 Starting 24 threads 00:21:57.228 EAL: No free 2048 kB hugepages reported on node 1 00:22:09.440 00:22:09.440 filename0: (groupid=0, jobs=1): err= 0: pid=3224024: Fri Apr 26 14:26:49 2024 00:22:09.440 read: IOPS=38, BW=153KiB/s (157kB/s)(1536KiB/10021msec) 00:22:09.440 slat (usec): min=14, max=159, avg=97.78, stdev=28.83 00:22:09.440 clat (msec): min=247, max=538, avg=416.67, stdev=43.85 00:22:09.440 lat (msec): min=247, max=538, avg=416.77, stdev=43.86 00:22:09.440 clat percentiles (msec): 00:22:09.440 | 1.00th=[ 264], 5.00th=[ 376], 10.00th=[ 397], 20.00th=[ 409], 00:22:09.440 | 30.00th=[ 414], 40.00th=[ 414], 50.00th=[ 418], 60.00th=[ 422], 00:22:09.440 | 70.00th=[ 422], 80.00th=[ 435], 90.00th=[ 451], 95.00th=[ 472], 00:22:09.440 | 99.00th=[ 514], 99.50th=[ 542], 99.90th=[ 542], 99.95th=[ 542], 00:22:09.440 | 99.99th=[ 542] 00:22:09.440 bw ( KiB/s): min= 128, max= 256, per=3.64%, avg=154.95, stdev=51.72, samples=19 00:22:09.440 iops : min= 32, max= 64, avg=38.74, stdev=12.93, samples=19 00:22:09.440 lat (msec) : 250=0.52%, 500=94.79%, 750=4.69% 00:22:09.440 cpu : usr=97.98%, sys=1.33%, ctx=79, majf=0, minf=27 00:22:09.440 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:22:09.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.440 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.440 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:09.440 filename0: (groupid=0, jobs=1): err= 0: pid=3224025: Fri Apr 26 14:26:49 2024 00:22:09.440 read: IOPS=53, BW=214KiB/s (219kB/s)(2176KiB/10158msec) 00:22:09.440 slat (usec): min=5, max=192, avg=72.17, stdev=34.75 00:22:09.440 clat (msec): min=142, max=542, avg=295.91, stdev=52.22 00:22:09.440 lat (msec): min=142, max=542, avg=295.99, stdev=52.23 00:22:09.440 clat percentiles (msec): 00:22:09.440 | 1.00th=[ 144], 5.00th=[ 268], 10.00th=[ 271], 20.00th=[ 271], 00:22:09.440 | 30.00th=[ 275], 40.00th=[ 284], 50.00th=[ 284], 60.00th=[ 288], 00:22:09.440 | 70.00th=[ 288], 80.00th=[ 300], 90.00th=[ 393], 95.00th=[ 414], 00:22:09.440 | 99.00th=[ 418], 99.50th=[ 418], 99.90th=[ 542], 99.95th=[ 542], 00:22:09.440 | 99.99th=[ 542] 00:22:09.440 bw ( KiB/s): min= 128, max= 256, per=4.98%, avg=211.20, stdev=56.29, samples=20 00:22:09.440 iops : min= 32, max= 64, avg=52.80, stdev=14.07, samples=20 00:22:09.440 lat (msec) : 250=2.94%, 500=96.69%, 750=0.37% 00:22:09.440 cpu : usr=98.06%, sys=1.17%, ctx=45, majf=0, minf=17 00:22:09.440 IO depths : 1=1.3%, 2=7.5%, 4=25.0%, 8=55.0%, 16=11.2%, 32=0.0%, >=64=0.0% 00:22:09.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.440 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.440 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:09.440 filename0: (groupid=0, jobs=1): err= 0: pid=3224026: Fri Apr 26 14:26:49 2024 00:22:09.440 read: IOPS=44, BW=177KiB/s (181kB/s)(1792KiB/10142msec) 00:22:09.440 slat (usec): min=10, max=173, avg=71.74, stdev=44.93 00:22:09.440 clat (msec): min=211, max=472, avg=361.57, stdev=72.91 00:22:09.440 lat (msec): min=211, max=472, avg=361.64, stdev=72.95 00:22:09.440 clat percentiles (msec): 00:22:09.440 | 1.00th=[ 213], 5.00th=[ 271], 10.00th=[ 271], 20.00th=[ 284], 00:22:09.440 | 30.00th=[ 288], 40.00th=[ 368], 50.00th=[ 401], 60.00th=[ 
414], 00:22:09.440 | 70.00th=[ 418], 80.00th=[ 422], 90.00th=[ 430], 95.00th=[ 451], 00:22:09.440 | 99.00th=[ 472], 99.50th=[ 472], 99.90th=[ 472], 99.95th=[ 472], 00:22:09.440 | 99.99th=[ 472] 00:22:09.440 bw ( KiB/s): min= 128, max= 256, per=4.06%, avg=172.75, stdev=62.57, samples=20 00:22:09.440 iops : min= 32, max= 64, avg=43.15, stdev=15.59, samples=20 00:22:09.440 lat (msec) : 250=3.57%, 500=96.43% 00:22:09.440 cpu : usr=98.27%, sys=1.17%, ctx=28, majf=0, minf=12 00:22:09.440 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:22:09.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.440 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.440 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:09.440 filename0: (groupid=0, jobs=1): err= 0: pid=3224027: Fri Apr 26 14:26:49 2024 00:22:09.440 read: IOPS=58, BW=232KiB/s (238kB/s)(2360KiB/10160msec) 00:22:09.440 slat (usec): min=4, max=178, avg=19.46, stdev=23.57 00:22:09.440 clat (msec): min=143, max=348, avg=274.86, stdev=28.94 00:22:09.440 lat (msec): min=143, max=348, avg=274.88, stdev=28.94 00:22:09.440 clat percentiles (msec): 00:22:09.440 | 1.00th=[ 144], 5.00th=[ 213], 10.00th=[ 255], 20.00th=[ 268], 00:22:09.440 | 30.00th=[ 271], 40.00th=[ 279], 50.00th=[ 284], 60.00th=[ 288], 00:22:09.440 | 70.00th=[ 288], 80.00th=[ 288], 90.00th=[ 300], 95.00th=[ 300], 00:22:09.440 | 99.00th=[ 309], 99.50th=[ 309], 99.90th=[ 351], 99.95th=[ 351], 00:22:09.440 | 99.99th=[ 351] 00:22:09.440 bw ( KiB/s): min= 128, max= 256, per=5.41%, avg=229.50, stdev=46.45, samples=20 00:22:09.440 iops : min= 32, max= 64, avg=57.35, stdev=11.60, samples=20 00:22:09.440 lat (msec) : 250=7.80%, 500=92.20% 00:22:09.440 cpu : usr=98.04%, sys=1.34%, ctx=36, majf=0, minf=26 00:22:09.440 IO depths : 1=1.4%, 2=7.6%, 4=25.1%, 8=54.9%, 16=11.0%, 32=0.0%, >=64=0.0% 00:22:09.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.440 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.440 issued rwts: total=590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:09.440 filename0: (groupid=0, jobs=1): err= 0: pid=3224028: Fri Apr 26 14:26:49 2024 00:22:09.440 read: IOPS=49, BW=199KiB/s (204kB/s)(2016KiB/10140msec) 00:22:09.440 slat (usec): min=9, max=140, avg=54.14, stdev=43.05 00:22:09.440 clat (msec): min=124, max=549, avg=318.96, stdev=73.13 00:22:09.440 lat (msec): min=124, max=549, avg=319.02, stdev=73.15 00:22:09.440 clat percentiles (msec): 00:22:09.440 | 1.00th=[ 125], 5.00th=[ 245], 10.00th=[ 255], 20.00th=[ 271], 00:22:09.440 | 30.00th=[ 279], 40.00th=[ 288], 50.00th=[ 296], 60.00th=[ 300], 00:22:09.440 | 70.00th=[ 368], 80.00th=[ 405], 90.00th=[ 426], 95.00th=[ 435], 00:22:09.440 | 99.00th=[ 468], 99.50th=[ 468], 99.90th=[ 550], 99.95th=[ 550], 00:22:09.440 | 99.99th=[ 550] 00:22:09.440 bw ( KiB/s): min= 127, max= 256, per=4.70%, avg=199.15, stdev=55.61, samples=20 00:22:09.440 iops : min= 31, max= 64, avg=49.75, stdev=13.95, samples=20 00:22:09.440 lat (msec) : 250=7.14%, 500=92.46%, 750=0.40% 00:22:09.440 cpu : usr=98.20%, sys=1.21%, ctx=36, majf=0, minf=24 00:22:09.440 IO depths : 1=2.0%, 2=4.6%, 4=13.7%, 8=69.0%, 16=10.7%, 32=0.0%, >=64=0.0% 00:22:09.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.440 complete : 0=0.0%, 4=90.7%, 
8=3.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.440 issued rwts: total=504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:09.440 filename0: (groupid=0, jobs=1): err= 0: pid=3224029: Fri Apr 26 14:26:49 2024 00:22:09.440 read: IOPS=42, BW=170KiB/s (174kB/s)(1728KiB/10142msec) 00:22:09.440 slat (usec): min=10, max=165, avg=82.53, stdev=36.12 00:22:09.440 clat (msec): min=124, max=463, avg=374.92, stdev=85.75 00:22:09.440 lat (msec): min=124, max=463, avg=375.00, stdev=85.78 00:22:09.440 clat percentiles (msec): 00:22:09.440 | 1.00th=[ 126], 5.00th=[ 213], 10.00th=[ 247], 20.00th=[ 288], 00:22:09.440 | 30.00th=[ 393], 40.00th=[ 405], 50.00th=[ 418], 60.00th=[ 422], 00:22:09.440 | 70.00th=[ 426], 80.00th=[ 435], 90.00th=[ 435], 95.00th=[ 460], 00:22:09.440 | 99.00th=[ 464], 99.50th=[ 464], 99.90th=[ 464], 99.95th=[ 464], 00:22:09.440 | 99.99th=[ 464] 00:22:09.440 bw ( KiB/s): min= 127, max= 256, per=3.92%, avg=166.35, stdev=58.63, samples=20 00:22:09.440 iops : min= 31, max= 64, avg=41.55, stdev=14.68, samples=20 00:22:09.440 lat (msec) : 250=13.89%, 500=86.11% 00:22:09.440 cpu : usr=97.97%, sys=1.36%, ctx=46, majf=0, minf=26 00:22:09.440 IO depths : 1=5.1%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.4%, 32=0.0%, >=64=0.0% 00:22:09.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.440 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.440 issued rwts: total=432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:09.440 filename0: (groupid=0, jobs=1): err= 0: pid=3224030: Fri Apr 26 14:26:49 2024 00:22:09.440 read: IOPS=38, BW=153KiB/s (157kB/s)(1536KiB/10030msec) 00:22:09.440 slat (usec): min=10, max=153, avg=23.68, stdev=18.16 00:22:09.440 clat (msec): min=266, max=628, avg=417.72, stdev=63.45 00:22:09.440 lat (msec): min=266, max=628, avg=417.75, stdev=63.45 00:22:09.440 clat percentiles (msec): 00:22:09.440 | 1.00th=[ 271], 5.00th=[ 279], 10.00th=[ 376], 20.00th=[ 397], 00:22:09.440 | 30.00th=[ 409], 40.00th=[ 414], 50.00th=[ 414], 60.00th=[ 418], 00:22:09.440 | 70.00th=[ 430], 80.00th=[ 447], 90.00th=[ 472], 95.00th=[ 550], 00:22:09.440 | 99.00th=[ 592], 99.50th=[ 625], 99.90th=[ 625], 99.95th=[ 625], 00:22:09.441 | 99.99th=[ 625] 00:22:09.441 bw ( KiB/s): min= 112, max= 256, per=3.64%, avg=154.89, stdev=50.08, samples=19 00:22:09.441 iops : min= 28, max= 64, avg=38.68, stdev=12.54, samples=19 00:22:09.441 lat (msec) : 500=90.62%, 750=9.38% 00:22:09.441 cpu : usr=98.03%, sys=1.35%, ctx=26, majf=0, minf=25 00:22:09.441 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:22:09.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.441 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.441 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.441 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:09.441 filename0: (groupid=0, jobs=1): err= 0: pid=3224031: Fri Apr 26 14:26:49 2024 00:22:09.441 read: IOPS=37, BW=152KiB/s (155kB/s)(1536KiB/10118msec) 00:22:09.441 slat (usec): min=10, max=134, avg=48.36, stdev=41.45 00:22:09.441 clat (msec): min=211, max=626, avg=421.17, stdev=67.06 00:22:09.441 lat (msec): min=211, max=626, avg=421.22, stdev=67.05 00:22:09.441 clat percentiles (msec): 00:22:09.441 | 1.00th=[ 211], 5.00th=[ 292], 10.00th=[ 393], 20.00th=[ 401], 00:22:09.441 | 30.00th=[ 409], 
40.00th=[ 418], 50.00th=[ 422], 60.00th=[ 426], 00:22:09.441 | 70.00th=[ 435], 80.00th=[ 435], 90.00th=[ 460], 95.00th=[ 558], 00:22:09.441 | 99.00th=[ 625], 99.50th=[ 625], 99.90th=[ 625], 99.95th=[ 625], 00:22:09.441 | 99.99th=[ 625] 00:22:09.441 bw ( KiB/s): min= 128, max= 256, per=3.64%, avg=154.95, stdev=53.61, samples=19 00:22:09.441 iops : min= 32, max= 64, avg=38.74, stdev=13.40, samples=19 00:22:09.441 lat (msec) : 250=4.17%, 500=90.10%, 750=5.73% 00:22:09.441 cpu : usr=98.41%, sys=1.10%, ctx=14, majf=0, minf=30 00:22:09.441 IO depths : 1=4.7%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.8%, 32=0.0%, >=64=0.0% 00:22:09.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.441 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.441 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.441 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:09.441 filename1: (groupid=0, jobs=1): err= 0: pid=3224032: Fri Apr 26 14:26:49 2024 00:22:09.441 read: IOPS=54, BW=220KiB/s (225kB/s)(2232KiB/10155msec) 00:22:09.441 slat (usec): min=9, max=118, avg=23.25, stdev= 8.98 00:22:09.441 clat (msec): min=138, max=463, avg=289.51, stdev=51.83 00:22:09.441 lat (msec): min=138, max=463, avg=289.54, stdev=51.83 00:22:09.441 clat percentiles (msec): 00:22:09.441 | 1.00th=[ 140], 5.00th=[ 211], 10.00th=[ 253], 20.00th=[ 271], 00:22:09.441 | 30.00th=[ 275], 40.00th=[ 279], 50.00th=[ 284], 60.00th=[ 288], 00:22:09.441 | 70.00th=[ 292], 80.00th=[ 300], 90.00th=[ 351], 95.00th=[ 418], 00:22:09.441 | 99.00th=[ 430], 99.50th=[ 430], 99.90th=[ 464], 99.95th=[ 464], 00:22:09.441 | 99.99th=[ 464] 00:22:09.441 bw ( KiB/s): min= 128, max= 256, per=5.10%, avg=216.80, stdev=45.40, samples=20 00:22:09.441 iops : min= 32, max= 64, avg=54.20, stdev=11.35, samples=20 00:22:09.441 lat (msec) : 250=9.68%, 500=90.32% 00:22:09.441 cpu : usr=97.91%, sys=1.31%, ctx=13, majf=0, minf=22 00:22:09.441 IO depths : 1=0.7%, 2=2.3%, 4=10.8%, 8=74.2%, 16=12.0%, 32=0.0%, >=64=0.0% 00:22:09.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.441 complete : 0=0.0%, 4=89.9%, 8=4.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.441 issued rwts: total=558,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.441 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:09.441 filename1: (groupid=0, jobs=1): err= 0: pid=3224033: Fri Apr 26 14:26:49 2024 00:22:09.441 read: IOPS=55, BW=221KiB/s (226kB/s)(2240KiB/10140msec) 00:22:09.441 slat (nsec): min=8654, max=79908, avg=17111.92, stdev=8463.96 00:22:09.441 clat (msec): min=217, max=431, avg=287.31, stdev=34.33 00:22:09.441 lat (msec): min=217, max=431, avg=287.32, stdev=34.33 00:22:09.441 clat percentiles (msec): 00:22:09.441 | 1.00th=[ 218], 5.00th=[ 247], 10.00th=[ 271], 20.00th=[ 271], 00:22:09.441 | 30.00th=[ 275], 40.00th=[ 279], 50.00th=[ 284], 60.00th=[ 288], 00:22:09.441 | 70.00th=[ 288], 80.00th=[ 288], 90.00th=[ 300], 95.00th=[ 388], 00:22:09.441 | 99.00th=[ 430], 99.50th=[ 430], 99.90th=[ 430], 99.95th=[ 430], 00:22:09.441 | 99.99th=[ 430] 00:22:09.441 bw ( KiB/s): min= 128, max= 256, per=5.12%, avg=217.55, stdev=51.72, samples=20 00:22:09.441 iops : min= 32, max= 64, avg=54.35, stdev=12.90, samples=20 00:22:09.441 lat (msec) : 250=5.36%, 500=94.64% 00:22:09.441 cpu : usr=98.43%, sys=1.09%, ctx=34, majf=0, minf=34 00:22:09.441 IO depths : 1=0.5%, 2=6.8%, 4=25.0%, 8=55.7%, 16=12.0%, 32=0.0%, >=64=0.0% 00:22:09.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:22:09.441 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.441 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.441 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:09.441 filename1: (groupid=0, jobs=1): err= 0: pid=3224034: Fri Apr 26 14:26:49 2024 00:22:09.441 read: IOPS=39, BW=158KiB/s (162kB/s)(1600KiB/10142msec) 00:22:09.441 slat (usec): min=19, max=155, avg=81.78, stdev=33.36 00:22:09.441 clat (msec): min=212, max=553, avg=404.97, stdev=64.89 00:22:09.441 lat (msec): min=212, max=553, avg=405.05, stdev=64.90 00:22:09.441 clat percentiles (msec): 00:22:09.441 | 1.00th=[ 213], 5.00th=[ 218], 10.00th=[ 292], 20.00th=[ 393], 00:22:09.441 | 30.00th=[ 409], 40.00th=[ 418], 50.00th=[ 422], 60.00th=[ 426], 00:22:09.441 | 70.00th=[ 435], 80.00th=[ 435], 90.00th=[ 439], 95.00th=[ 464], 00:22:09.441 | 99.00th=[ 550], 99.50th=[ 550], 99.90th=[ 550], 99.95th=[ 550], 00:22:09.441 | 99.99th=[ 550] 00:22:09.441 bw ( KiB/s): min= 127, max= 256, per=3.61%, avg=153.55, stdev=50.73, samples=20 00:22:09.441 iops : min= 31, max= 64, avg=38.35, stdev=12.70, samples=20 00:22:09.441 lat (msec) : 250=8.00%, 500=90.00%, 750=2.00% 00:22:09.441 cpu : usr=98.14%, sys=1.16%, ctx=92, majf=0, minf=22 00:22:09.441 IO depths : 1=4.8%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.8%, 32=0.0%, >=64=0.0% 00:22:09.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.441 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.441 issued rwts: total=400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.441 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:09.441 filename1: (groupid=0, jobs=1): err= 0: pid=3224035: Fri Apr 26 14:26:49 2024 00:22:09.441 read: IOPS=44, BW=177KiB/s (182kB/s)(1792KiB/10110msec) 00:22:09.441 slat (usec): min=8, max=147, avg=52.53, stdev=44.04 00:22:09.441 clat (msec): min=263, max=619, avg=357.87, stdev=74.68 00:22:09.441 lat (msec): min=263, max=619, avg=357.92, stdev=74.71 00:22:09.441 clat percentiles (msec): 00:22:09.441 | 1.00th=[ 271], 5.00th=[ 271], 10.00th=[ 284], 20.00th=[ 284], 00:22:09.441 | 30.00th=[ 288], 40.00th=[ 300], 50.00th=[ 376], 60.00th=[ 401], 00:22:09.441 | 70.00th=[ 414], 80.00th=[ 418], 90.00th=[ 430], 95.00th=[ 451], 00:22:09.441 | 99.00th=[ 558], 99.50th=[ 575], 99.90th=[ 617], 99.95th=[ 617], 00:22:09.441 | 99.99th=[ 617] 00:22:09.441 bw ( KiB/s): min= 128, max= 256, per=4.27%, avg=181.89, stdev=63.38, samples=19 00:22:09.441 iops : min= 32, max= 64, avg=45.47, stdev=15.84, samples=19 00:22:09.441 lat (msec) : 500=95.54%, 750=4.46% 00:22:09.441 cpu : usr=98.24%, sys=1.22%, ctx=44, majf=0, minf=21 00:22:09.441 IO depths : 1=3.6%, 2=9.8%, 4=25.0%, 8=52.7%, 16=8.9%, 32=0.0%, >=64=0.0% 00:22:09.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.441 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.441 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.441 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:09.441 filename1: (groupid=0, jobs=1): err= 0: pid=3224036: Fri Apr 26 14:26:49 2024 00:22:09.441 read: IOPS=37, BW=152KiB/s (155kB/s)(1536KiB/10133msec) 00:22:09.441 slat (usec): min=5, max=144, avg=85.86, stdev=33.93 00:22:09.441 clat (msec): min=265, max=598, avg=418.30, stdev=61.63 00:22:09.441 lat (msec): min=265, max=599, avg=418.39, stdev=61.63 00:22:09.441 clat percentiles (msec): 00:22:09.441 | 1.00th=[ 271], 5.00th=[ 288], 10.00th=[ 
376], 20.00th=[ 397], 00:22:09.441 | 30.00th=[ 409], 40.00th=[ 414], 50.00th=[ 418], 60.00th=[ 422], 00:22:09.441 | 70.00th=[ 426], 80.00th=[ 447], 90.00th=[ 472], 95.00th=[ 535], 00:22:09.441 | 99.00th=[ 575], 99.50th=[ 600], 99.90th=[ 600], 99.95th=[ 600], 00:22:09.441 | 99.99th=[ 600] 00:22:09.441 bw ( KiB/s): min= 112, max= 256, per=3.64%, avg=154.95, stdev=48.02, samples=19 00:22:09.441 iops : min= 28, max= 64, avg=38.74, stdev=12.00, samples=19 00:22:09.441 lat (msec) : 500=91.15%, 750=8.85% 00:22:09.441 cpu : usr=97.97%, sys=1.32%, ctx=44, majf=0, minf=24 00:22:09.441 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:22:09.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.441 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.441 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.441 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:09.441 filename1: (groupid=0, jobs=1): err= 0: pid=3224037: Fri Apr 26 14:26:49 2024 00:22:09.441 read: IOPS=53, BW=213KiB/s (218kB/s)(2160KiB/10159msec) 00:22:09.441 slat (usec): min=6, max=132, avg=77.51, stdev=30.05 00:22:09.441 clat (msec): min=143, max=524, avg=299.03, stdev=58.32 00:22:09.441 lat (msec): min=143, max=524, avg=299.11, stdev=58.33 00:22:09.441 clat percentiles (msec): 00:22:09.441 | 1.00th=[ 144], 5.00th=[ 243], 10.00th=[ 255], 20.00th=[ 271], 00:22:09.441 | 30.00th=[ 275], 40.00th=[ 284], 50.00th=[ 288], 60.00th=[ 288], 00:22:09.441 | 70.00th=[ 292], 80.00th=[ 317], 90.00th=[ 401], 95.00th=[ 418], 00:22:09.441 | 99.00th=[ 439], 99.50th=[ 493], 99.90th=[ 523], 99.95th=[ 523], 00:22:09.441 | 99.99th=[ 523] 00:22:09.441 bw ( KiB/s): min= 128, max= 256, per=4.94%, avg=209.60, stdev=51.10, samples=20 00:22:09.441 iops : min= 32, max= 64, avg=52.40, stdev=12.77, samples=20 00:22:09.441 lat (msec) : 250=7.78%, 500=91.85%, 750=0.37% 00:22:09.441 cpu : usr=98.38%, sys=1.22%, ctx=13, majf=0, minf=26 00:22:09.441 IO depths : 1=0.7%, 2=3.7%, 4=15.0%, 8=68.7%, 16=11.9%, 32=0.0%, >=64=0.0% 00:22:09.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.441 complete : 0=0.0%, 4=91.2%, 8=3.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.441 issued rwts: total=540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.441 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:09.441 filename1: (groupid=0, jobs=1): err= 0: pid=3224038: Fri Apr 26 14:26:49 2024 00:22:09.441 read: IOPS=37, BW=152KiB/s (156kB/s)(1536KiB/10110msec) 00:22:09.441 slat (usec): min=11, max=121, avg=39.41, stdev=27.37 00:22:09.441 clat (msec): min=211, max=589, avg=420.97, stdev=45.67 00:22:09.441 lat (msec): min=211, max=589, avg=421.01, stdev=45.68 00:22:09.441 clat percentiles (msec): 00:22:09.441 | 1.00th=[ 271], 5.00th=[ 359], 10.00th=[ 376], 20.00th=[ 405], 00:22:09.442 | 30.00th=[ 405], 40.00th=[ 414], 50.00th=[ 414], 60.00th=[ 426], 00:22:09.442 | 70.00th=[ 430], 80.00th=[ 435], 90.00th=[ 472], 95.00th=[ 514], 00:22:09.442 | 99.00th=[ 575], 99.50th=[ 592], 99.90th=[ 592], 99.95th=[ 592], 00:22:09.442 | 99.99th=[ 592] 00:22:09.442 bw ( KiB/s): min= 112, max= 256, per=3.64%, avg=154.95, stdev=53.88, samples=19 00:22:09.442 iops : min= 28, max= 64, avg=38.74, stdev=13.47, samples=19 00:22:09.442 lat (msec) : 250=0.52%, 500=92.71%, 750=6.77% 00:22:09.442 cpu : usr=98.46%, sys=1.16%, ctx=13, majf=0, minf=29 00:22:09.442 IO depths : 1=4.9%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.6%, 32=0.0%, >=64=0.0% 00:22:09.442 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.442 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.442 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.442 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:09.442 filename1: (groupid=0, jobs=1): err= 0: pid=3224039: Fri Apr 26 14:26:49 2024 00:22:09.442 read: IOPS=39, BW=158KiB/s (162kB/s)(1600KiB/10142msec) 00:22:09.442 slat (usec): min=19, max=136, avg=96.49, stdev=21.25 00:22:09.442 clat (msec): min=211, max=584, avg=404.89, stdev=61.50 00:22:09.442 lat (msec): min=211, max=584, avg=404.99, stdev=61.50 00:22:09.442 clat percentiles (msec): 00:22:09.442 | 1.00th=[ 211], 5.00th=[ 275], 10.00th=[ 317], 20.00th=[ 393], 00:22:09.442 | 30.00th=[ 401], 40.00th=[ 409], 50.00th=[ 418], 60.00th=[ 422], 00:22:09.442 | 70.00th=[ 426], 80.00th=[ 435], 90.00th=[ 451], 95.00th=[ 464], 00:22:09.442 | 99.00th=[ 567], 99.50th=[ 567], 99.90th=[ 584], 99.95th=[ 584], 00:22:09.442 | 99.99th=[ 584] 00:22:09.442 bw ( KiB/s): min= 112, max= 256, per=3.61%, avg=153.55, stdev=50.99, samples=20 00:22:09.442 iops : min= 28, max= 64, avg=38.35, stdev=12.77, samples=20 00:22:09.442 lat (msec) : 250=4.00%, 500=92.00%, 750=4.00% 00:22:09.442 cpu : usr=98.42%, sys=1.18%, ctx=14, majf=0, minf=18 00:22:09.442 IO depths : 1=3.8%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.8%, 32=0.0%, >=64=0.0% 00:22:09.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.442 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.442 issued rwts: total=400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.442 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:09.442 filename2: (groupid=0, jobs=1): err= 0: pid=3224040: Fri Apr 26 14:26:49 2024 00:22:09.442 read: IOPS=37, BW=152KiB/s (156kB/s)(1536KiB/10110msec) 00:22:09.442 slat (usec): min=21, max=127, avg=88.64, stdev=19.20 00:22:09.442 clat (msec): min=211, max=588, avg=420.52, stdev=54.85 00:22:09.442 lat (msec): min=211, max=588, avg=420.61, stdev=54.85 00:22:09.442 clat percentiles (msec): 00:22:09.442 | 1.00th=[ 268], 5.00th=[ 300], 10.00th=[ 376], 20.00th=[ 393], 00:22:09.442 | 30.00th=[ 405], 40.00th=[ 414], 50.00th=[ 414], 60.00th=[ 426], 00:22:09.442 | 70.00th=[ 430], 80.00th=[ 439], 90.00th=[ 472], 95.00th=[ 535], 00:22:09.442 | 99.00th=[ 584], 99.50th=[ 592], 99.90th=[ 592], 99.95th=[ 592], 00:22:09.442 | 99.99th=[ 592] 00:22:09.442 bw ( KiB/s): min= 112, max= 256, per=3.64%, avg=154.95, stdev=52.54, samples=19 00:22:09.442 iops : min= 28, max= 64, avg=38.74, stdev=13.14, samples=19 00:22:09.442 lat (msec) : 250=0.52%, 500=90.10%, 750=9.38% 00:22:09.442 cpu : usr=98.41%, sys=1.20%, ctx=14, majf=0, minf=21 00:22:09.442 IO depths : 1=3.6%, 2=9.9%, 4=25.0%, 8=52.6%, 16=8.9%, 32=0.0%, >=64=0.0% 00:22:09.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.442 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.442 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.442 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:09.442 filename2: (groupid=0, jobs=1): err= 0: pid=3224041: Fri Apr 26 14:26:49 2024 00:22:09.442 read: IOPS=53, BW=215KiB/s (220kB/s)(2176KiB/10140msec) 00:22:09.442 slat (usec): min=8, max=126, avg=19.99, stdev=15.82 00:22:09.442 clat (msec): min=254, max=432, avg=295.73, stdev=41.43 00:22:09.442 lat (msec): min=254, max=432, avg=295.75, stdev=41.44 00:22:09.442 clat 
percentiles (msec): 00:22:09.442 | 1.00th=[ 255], 5.00th=[ 266], 10.00th=[ 271], 20.00th=[ 271], 00:22:09.442 | 30.00th=[ 275], 40.00th=[ 284], 50.00th=[ 288], 60.00th=[ 288], 00:22:09.442 | 70.00th=[ 288], 80.00th=[ 296], 90.00th=[ 368], 95.00th=[ 401], 00:22:09.442 | 99.00th=[ 435], 99.50th=[ 435], 99.90th=[ 435], 99.95th=[ 435], 00:22:09.442 | 99.99th=[ 435] 00:22:09.442 bw ( KiB/s): min= 128, max= 256, per=4.98%, avg=211.15, stdev=56.25, samples=20 00:22:09.442 iops : min= 32, max= 64, avg=52.75, stdev=14.03, samples=20 00:22:09.442 lat (msec) : 500=100.00% 00:22:09.442 cpu : usr=98.70%, sys=0.92%, ctx=16, majf=0, minf=17 00:22:09.442 IO depths : 1=1.5%, 2=7.7%, 4=25.0%, 8=54.8%, 16=11.0%, 32=0.0%, >=64=0.0% 00:22:09.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.442 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.442 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.442 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:09.442 filename2: (groupid=0, jobs=1): err= 0: pid=3224042: Fri Apr 26 14:26:49 2024 00:22:09.442 read: IOPS=51, BW=204KiB/s (209kB/s)(2072KiB/10141msec) 00:22:09.442 slat (usec): min=8, max=162, avg=30.05, stdev=31.06 00:22:09.442 clat (msec): min=216, max=551, avg=311.92, stdev=65.17 00:22:09.442 lat (msec): min=216, max=551, avg=311.95, stdev=65.19 00:22:09.442 clat percentiles (msec): 00:22:09.442 | 1.00th=[ 218], 5.00th=[ 245], 10.00th=[ 255], 20.00th=[ 271], 00:22:09.442 | 30.00th=[ 279], 40.00th=[ 284], 50.00th=[ 288], 60.00th=[ 288], 00:22:09.442 | 70.00th=[ 296], 80.00th=[ 397], 90.00th=[ 426], 95.00th=[ 435], 00:22:09.442 | 99.00th=[ 447], 99.50th=[ 527], 99.90th=[ 550], 99.95th=[ 550], 00:22:09.442 | 99.99th=[ 550] 00:22:09.442 bw ( KiB/s): min= 128, max= 256, per=4.72%, avg=200.75, stdev=57.16, samples=20 00:22:09.442 iops : min= 32, max= 64, avg=50.15, stdev=14.25, samples=20 00:22:09.442 lat (msec) : 250=6.56%, 500=92.66%, 750=0.77% 00:22:09.442 cpu : usr=98.49%, sys=1.08%, ctx=16, majf=0, minf=31 00:22:09.442 IO depths : 1=1.0%, 2=6.9%, 4=24.1%, 8=56.4%, 16=11.6%, 32=0.0%, >=64=0.0% 00:22:09.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.442 complete : 0=0.0%, 4=94.0%, 8=0.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.442 issued rwts: total=518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.442 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:09.442 filename2: (groupid=0, jobs=1): err= 0: pid=3224043: Fri Apr 26 14:26:49 2024 00:22:09.442 read: IOPS=39, BW=158KiB/s (162kB/s)(1600KiB/10142msec) 00:22:09.442 slat (usec): min=14, max=161, avg=90.33, stdev=33.55 00:22:09.442 clat (msec): min=263, max=569, avg=404.93, stdev=55.76 00:22:09.442 lat (msec): min=263, max=569, avg=405.02, stdev=55.77 00:22:09.442 clat percentiles (msec): 00:22:09.442 | 1.00th=[ 275], 5.00th=[ 288], 10.00th=[ 296], 20.00th=[ 368], 00:22:09.442 | 30.00th=[ 401], 40.00th=[ 405], 50.00th=[ 414], 60.00th=[ 426], 00:22:09.442 | 70.00th=[ 430], 80.00th=[ 435], 90.00th=[ 443], 95.00th=[ 460], 00:22:09.442 | 99.00th=[ 558], 99.50th=[ 567], 99.90th=[ 567], 99.95th=[ 567], 00:22:09.442 | 99.99th=[ 567] 00:22:09.442 bw ( KiB/s): min= 127, max= 256, per=3.61%, avg=153.55, stdev=50.73, samples=20 00:22:09.442 iops : min= 31, max= 64, avg=38.35, stdev=12.70, samples=20 00:22:09.442 lat (msec) : 500=96.00%, 750=4.00% 00:22:09.442 cpu : usr=97.53%, sys=1.52%, ctx=201, majf=0, minf=18 00:22:09.442 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 
8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:22:09.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.442 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.442 issued rwts: total=400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.442 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:09.442 filename2: (groupid=0, jobs=1): err= 0: pid=3224044: Fri Apr 26 14:26:49 2024 00:22:09.442 read: IOPS=37, BW=151KiB/s (155kB/s)(1528KiB/10112msec) 00:22:09.442 slat (usec): min=11, max=131, avg=66.87, stdev=36.86 00:22:09.442 clat (msec): min=212, max=805, avg=422.55, stdev=67.71 00:22:09.442 lat (msec): min=212, max=805, avg=422.62, stdev=67.71 00:22:09.442 clat percentiles (msec): 00:22:09.442 | 1.00th=[ 213], 5.00th=[ 368], 10.00th=[ 388], 20.00th=[ 401], 00:22:09.442 | 30.00th=[ 409], 40.00th=[ 418], 50.00th=[ 422], 60.00th=[ 430], 00:22:09.442 | 70.00th=[ 435], 80.00th=[ 435], 90.00th=[ 460], 95.00th=[ 550], 00:22:09.442 | 99.00th=[ 617], 99.50th=[ 802], 99.90th=[ 802], 99.95th=[ 802], 00:22:09.442 | 99.99th=[ 802] 00:22:09.442 bw ( KiB/s): min= 128, max= 256, per=3.64%, avg=154.11, stdev=48.06, samples=19 00:22:09.442 iops : min= 32, max= 64, avg=38.53, stdev=12.02, samples=19 00:22:09.442 lat (msec) : 250=3.66%, 500=91.10%, 750=4.71%, 1000=0.52% 00:22:09.442 cpu : usr=98.49%, sys=1.10%, ctx=19, majf=0, minf=23 00:22:09.442 IO depths : 1=1.6%, 2=7.9%, 4=25.1%, 8=54.7%, 16=10.7%, 32=0.0%, >=64=0.0% 00:22:09.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.442 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.442 issued rwts: total=382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.442 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:09.442 filename2: (groupid=0, jobs=1): err= 0: pid=3224045: Fri Apr 26 14:26:49 2024 00:22:09.442 read: IOPS=37, BW=152KiB/s (155kB/s)(1536KiB/10137msec) 00:22:09.442 slat (usec): min=20, max=143, avg=91.13, stdev=27.27 00:22:09.442 clat (msec): min=267, max=645, avg=418.43, stdev=46.20 00:22:09.442 lat (msec): min=267, max=645, avg=418.52, stdev=46.20 00:22:09.442 clat percentiles (msec): 00:22:09.442 | 1.00th=[ 288], 5.00th=[ 359], 10.00th=[ 388], 20.00th=[ 401], 00:22:09.442 | 30.00th=[ 409], 40.00th=[ 414], 50.00th=[ 418], 60.00th=[ 418], 00:22:09.442 | 70.00th=[ 426], 80.00th=[ 435], 90.00th=[ 451], 95.00th=[ 472], 00:22:09.442 | 99.00th=[ 567], 99.50th=[ 642], 99.90th=[ 642], 99.95th=[ 642], 00:22:09.442 | 99.99th=[ 642] 00:22:09.442 bw ( KiB/s): min= 128, max= 256, per=3.64%, avg=154.95, stdev=47.72, samples=19 00:22:09.442 iops : min= 32, max= 64, avg=38.74, stdev=11.93, samples=19 00:22:09.442 lat (msec) : 500=95.31%, 750=4.69% 00:22:09.442 cpu : usr=98.48%, sys=1.10%, ctx=14, majf=0, minf=25 00:22:09.442 IO depths : 1=1.0%, 2=7.3%, 4=25.0%, 8=55.2%, 16=11.5%, 32=0.0%, >=64=0.0% 00:22:09.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.442 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.442 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.442 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:09.442 filename2: (groupid=0, jobs=1): err= 0: pid=3224046: Fri Apr 26 14:26:49 2024 00:22:09.442 read: IOPS=37, BW=152KiB/s (155kB/s)(1536KiB/10136msec) 00:22:09.443 slat (usec): min=16, max=204, avg=36.14, stdev=24.56 00:22:09.443 clat (msec): min=211, max=589, avg=422.03, stdev=58.81 00:22:09.443 lat (msec): min=212, 
max=589, avg=422.07, stdev=58.81 00:22:09.443 clat percentiles (msec): 00:22:09.443 | 1.00th=[ 268], 5.00th=[ 288], 10.00th=[ 376], 20.00th=[ 393], 00:22:09.443 | 30.00th=[ 405], 40.00th=[ 414], 50.00th=[ 414], 60.00th=[ 426], 00:22:09.443 | 70.00th=[ 430], 80.00th=[ 439], 90.00th=[ 472], 95.00th=[ 542], 00:22:09.443 | 99.00th=[ 584], 99.50th=[ 592], 99.90th=[ 592], 99.95th=[ 592], 00:22:09.443 | 99.99th=[ 592] 00:22:09.443 bw ( KiB/s): min= 112, max= 256, per=3.64%, avg=154.95, stdev=48.02, samples=19 00:22:09.443 iops : min= 28, max= 64, avg=38.74, stdev=12.00, samples=19 00:22:09.443 lat (msec) : 250=0.52%, 500=89.58%, 750=9.90% 00:22:09.443 cpu : usr=98.49%, sys=1.11%, ctx=12, majf=0, minf=20 00:22:09.443 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:22:09.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.443 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.443 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.443 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:09.443 filename2: (groupid=0, jobs=1): err= 0: pid=3224047: Fri Apr 26 14:26:49 2024 00:22:09.443 read: IOPS=41, BW=164KiB/s (168kB/s)(1664KiB/10141msec) 00:22:09.443 slat (usec): min=19, max=134, avg=85.63, stdev=23.76 00:22:09.443 clat (msec): min=216, max=522, avg=389.30, stdev=58.75 00:22:09.443 lat (msec): min=216, max=522, avg=389.38, stdev=58.76 00:22:09.443 clat percentiles (msec): 00:22:09.443 | 1.00th=[ 218], 5.00th=[ 288], 10.00th=[ 288], 20.00th=[ 359], 00:22:09.443 | 30.00th=[ 393], 40.00th=[ 405], 50.00th=[ 414], 60.00th=[ 422], 00:22:09.443 | 70.00th=[ 426], 80.00th=[ 430], 90.00th=[ 435], 95.00th=[ 439], 00:22:09.443 | 99.00th=[ 443], 99.50th=[ 443], 99.90th=[ 523], 99.95th=[ 523], 00:22:09.443 | 99.99th=[ 523] 00:22:09.443 bw ( KiB/s): min= 112, max= 256, per=3.75%, avg=159.95, stdev=53.73, samples=20 00:22:09.443 iops : min= 28, max= 64, avg=39.95, stdev=13.46, samples=20 00:22:09.443 lat (msec) : 250=3.85%, 500=95.67%, 750=0.48% 00:22:09.443 cpu : usr=98.47%, sys=1.13%, ctx=10, majf=0, minf=23 00:22:09.443 IO depths : 1=4.8%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:22:09.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.443 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.443 issued rwts: total=416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.443 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:09.443 00:22:09.443 Run status group 0 (all jobs): 00:22:09.443 READ: bw=4235KiB/s (4336kB/s), 151KiB/s-232KiB/s (155kB/s-238kB/s), io=42.0MiB (44.1MB), run=10021-10160msec 00:22:09.443 14:26:49 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:22:09.443 14:26:49 -- target/dif.sh@43 -- # local sub 00:22:09.443 14:26:49 -- target/dif.sh@45 -- # for sub in "$@" 00:22:09.443 14:26:49 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:09.443 14:26:49 -- target/dif.sh@36 -- # local sub_id=0 00:22:09.443 14:26:49 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:09.443 14:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.443 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:22:09.443 14:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.443 14:26:49 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:09.443 14:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.443 14:26:49 -- 
common/autotest_common.sh@10 -- # set +x 00:22:09.443 14:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.443 14:26:49 -- target/dif.sh@45 -- # for sub in "$@" 00:22:09.443 14:26:49 -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:09.443 14:26:49 -- target/dif.sh@36 -- # local sub_id=1 00:22:09.443 14:26:49 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:09.443 14:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.443 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:22:09.443 14:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.443 14:26:49 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:09.443 14:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.443 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:22:09.443 14:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.443 14:26:49 -- target/dif.sh@45 -- # for sub in "$@" 00:22:09.443 14:26:49 -- target/dif.sh@46 -- # destroy_subsystem 2 00:22:09.443 14:26:49 -- target/dif.sh@36 -- # local sub_id=2 00:22:09.443 14:26:49 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:09.443 14:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.443 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:22:09.443 14:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.443 14:26:49 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:22:09.443 14:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.443 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:22:09.443 14:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.443 14:26:49 -- target/dif.sh@115 -- # NULL_DIF=1 00:22:09.443 14:26:49 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:22:09.443 14:26:49 -- target/dif.sh@115 -- # numjobs=2 00:22:09.443 14:26:49 -- target/dif.sh@115 -- # iodepth=8 00:22:09.443 14:26:49 -- target/dif.sh@115 -- # runtime=5 00:22:09.443 14:26:49 -- target/dif.sh@115 -- # files=1 00:22:09.443 14:26:49 -- target/dif.sh@117 -- # create_subsystems 0 1 00:22:09.443 14:26:49 -- target/dif.sh@28 -- # local sub 00:22:09.443 14:26:49 -- target/dif.sh@30 -- # for sub in "$@" 00:22:09.443 14:26:49 -- target/dif.sh@31 -- # create_subsystem 0 00:22:09.443 14:26:49 -- target/dif.sh@18 -- # local sub_id=0 00:22:09.443 14:26:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:09.443 14:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.443 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:22:09.443 bdev_null0 00:22:09.443 14:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.443 14:26:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:09.443 14:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.443 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:22:09.443 14:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.443 14:26:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:09.443 14:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.443 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:22:09.443 14:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.443 14:26:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 
-t tcp -a 10.0.0.2 -s 4420 00:22:09.443 14:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.443 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:22:09.443 [2024-04-26 14:26:49.389770] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.443 14:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.443 14:26:49 -- target/dif.sh@30 -- # for sub in "$@" 00:22:09.443 14:26:49 -- target/dif.sh@31 -- # create_subsystem 1 00:22:09.443 14:26:49 -- target/dif.sh@18 -- # local sub_id=1 00:22:09.443 14:26:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:09.443 14:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.443 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:22:09.443 bdev_null1 00:22:09.443 14:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.443 14:26:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:09.443 14:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.443 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:22:09.443 14:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.443 14:26:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:09.443 14:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.443 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:22:09.443 14:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.443 14:26:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:09.443 14:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.443 14:26:49 -- common/autotest_common.sh@10 -- # set +x 00:22:09.443 14:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.443 14:26:49 -- target/dif.sh@118 -- # fio /dev/fd/62 00:22:09.443 14:26:49 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:22:09.443 14:26:49 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:09.443 14:26:49 -- nvmf/common.sh@521 -- # config=() 00:22:09.443 14:26:49 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:09.443 14:26:49 -- nvmf/common.sh@521 -- # local subsystem config 00:22:09.443 14:26:49 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:09.443 14:26:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:09.443 14:26:49 -- target/dif.sh@82 -- # gen_fio_conf 00:22:09.443 14:26:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:09.443 { 00:22:09.443 "params": { 00:22:09.443 "name": "Nvme$subsystem", 00:22:09.443 "trtype": "$TEST_TRANSPORT", 00:22:09.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.443 "adrfam": "ipv4", 00:22:09.443 "trsvcid": "$NVMF_PORT", 00:22:09.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.443 "hdgst": ${hdgst:-false}, 00:22:09.443 "ddgst": ${ddgst:-false} 00:22:09.443 }, 00:22:09.443 "method": "bdev_nvme_attach_controller" 00:22:09.443 } 00:22:09.443 EOF 00:22:09.443 )") 00:22:09.443 14:26:49 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:09.443 14:26:49 -- target/dif.sh@54 -- # local file 00:22:09.443 14:26:49 
-- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:09.443 14:26:49 -- target/dif.sh@56 -- # cat 00:22:09.443 14:26:49 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:09.443 14:26:49 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:09.443 14:26:49 -- common/autotest_common.sh@1327 -- # shift 00:22:09.443 14:26:49 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:09.443 14:26:49 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:09.443 14:26:49 -- nvmf/common.sh@543 -- # cat 00:22:09.444 14:26:49 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:09.444 14:26:49 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:09.444 14:26:49 -- target/dif.sh@72 -- # (( file = 1 )) 00:22:09.444 14:26:49 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:09.444 14:26:49 -- target/dif.sh@72 -- # (( file <= files )) 00:22:09.444 14:26:49 -- target/dif.sh@73 -- # cat 00:22:09.444 14:26:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:09.444 14:26:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:09.444 { 00:22:09.444 "params": { 00:22:09.444 "name": "Nvme$subsystem", 00:22:09.444 "trtype": "$TEST_TRANSPORT", 00:22:09.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:09.444 "adrfam": "ipv4", 00:22:09.444 "trsvcid": "$NVMF_PORT", 00:22:09.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:09.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:09.444 "hdgst": ${hdgst:-false}, 00:22:09.444 "ddgst": ${ddgst:-false} 00:22:09.444 }, 00:22:09.444 "method": "bdev_nvme_attach_controller" 00:22:09.444 } 00:22:09.444 EOF 00:22:09.444 )") 00:22:09.444 14:26:49 -- target/dif.sh@72 -- # (( file++ )) 00:22:09.444 14:26:49 -- target/dif.sh@72 -- # (( file <= files )) 00:22:09.444 14:26:49 -- nvmf/common.sh@543 -- # cat 00:22:09.444 14:26:49 -- nvmf/common.sh@545 -- # jq . 
00:22:09.444 14:26:49 -- nvmf/common.sh@546 -- # IFS=, 00:22:09.444 14:26:49 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:09.444 "params": { 00:22:09.444 "name": "Nvme0", 00:22:09.444 "trtype": "tcp", 00:22:09.444 "traddr": "10.0.0.2", 00:22:09.444 "adrfam": "ipv4", 00:22:09.444 "trsvcid": "4420", 00:22:09.444 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:09.444 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:09.444 "hdgst": false, 00:22:09.444 "ddgst": false 00:22:09.444 }, 00:22:09.444 "method": "bdev_nvme_attach_controller" 00:22:09.444 },{ 00:22:09.444 "params": { 00:22:09.444 "name": "Nvme1", 00:22:09.444 "trtype": "tcp", 00:22:09.444 "traddr": "10.0.0.2", 00:22:09.444 "adrfam": "ipv4", 00:22:09.444 "trsvcid": "4420", 00:22:09.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.444 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:09.444 "hdgst": false, 00:22:09.444 "ddgst": false 00:22:09.444 }, 00:22:09.444 "method": "bdev_nvme_attach_controller" 00:22:09.444 }' 00:22:09.444 14:26:49 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:09.444 14:26:49 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:09.444 14:26:49 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:09.444 14:26:49 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:09.444 14:26:49 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:09.444 14:26:49 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:09.444 14:26:49 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:09.444 14:26:49 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:09.444 14:26:49 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:22:09.444 14:26:49 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:09.444 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:09.444 ... 00:22:09.444 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:09.444 ... 
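The setup traced above can be reproduced by hand. A minimal sketch, assuming SPDK's scripts/rpc.py (the tool rpc_cmd wraps in this trace) against the default RPC socket; config.json and dif.fio are placeholder names standing in for the JSON printed above and the fio job definitions:

  # provision a DIF type 1 null bdev (64 MiB, 512 B blocks, 16 B metadata)
  # and export it as an NVMe/TCP subsystem, exactly as the trace does
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # same four calls again for bdev_null1 / cnode1, then drive I/O through
  # the fio bdev plugin (path relative to the SPDK tree)
  LD_PRELOAD=build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf config.json dif.fio

In the run itself both files are passed as /dev/fd descriptors (--spdk_json_conf /dev/fd/62 /dev/fd/61), which is why no on-disk config file ever appears in the trace.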
00:22:09.444 fio-3.35 00:22:09.444 Starting 4 threads 00:22:09.444 EAL: No free 2048 kB hugepages reported on node 1 00:22:14.705 00:22:14.705 filename0: (groupid=0, jobs=1): err= 0: pid=3225105: Fri Apr 26 14:26:55 2024 00:22:14.705 read: IOPS=1593, BW=12.5MiB/s (13.1MB/s)(62.3MiB/5003msec) 00:22:14.705 slat (nsec): min=7544, max=54322, avg=13754.65, stdev=8359.77 00:22:14.706 clat (usec): min=954, max=9121, avg=4970.82, stdev=517.70 00:22:14.706 lat (usec): min=972, max=9163, avg=4984.58, stdev=517.86 00:22:14.706 clat percentiles (usec): 00:22:14.706 | 1.00th=[ 3589], 5.00th=[ 4228], 10.00th=[ 4490], 20.00th=[ 4817], 00:22:14.706 | 30.00th=[ 4883], 40.00th=[ 4948], 50.00th=[ 4948], 60.00th=[ 5014], 00:22:14.706 | 70.00th=[ 5080], 80.00th=[ 5080], 90.00th=[ 5276], 95.00th=[ 5800], 00:22:14.706 | 99.00th=[ 6980], 99.50th=[ 7439], 99.90th=[ 8356], 99.95th=[ 8979], 00:22:14.706 | 99.99th=[ 9110] 00:22:14.706 bw ( KiB/s): min=12560, max=13184, per=25.12%, avg=12745.60, stdev=196.86, samples=10 00:22:14.706 iops : min= 1570, max= 1648, avg=1593.20, stdev=24.61, samples=10 00:22:14.706 lat (usec) : 1000=0.01% 00:22:14.706 lat (msec) : 2=0.10%, 4=2.17%, 10=97.72% 00:22:14.706 cpu : usr=95.12%, sys=4.32%, ctx=7, majf=0, minf=10 00:22:14.706 IO depths : 1=0.4%, 2=19.6%, 4=53.8%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:14.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.706 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.706 issued rwts: total=7974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.706 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:14.706 filename0: (groupid=0, jobs=1): err= 0: pid=3225106: Fri Apr 26 14:26:55 2024 00:22:14.706 read: IOPS=1526, BW=11.9MiB/s (12.5MB/s)(59.7MiB/5003msec) 00:22:14.706 slat (nsec): min=7556, max=62184, avg=18327.37, stdev=9191.27 00:22:14.706 clat (usec): min=892, max=9406, avg=5168.93, stdev=863.05 00:22:14.706 lat (usec): min=921, max=9437, avg=5187.25, stdev=861.95 00:22:14.706 clat percentiles (usec): 00:22:14.706 | 1.00th=[ 2966], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 4883], 00:22:14.706 | 30.00th=[ 4883], 40.00th=[ 4948], 50.00th=[ 4948], 60.00th=[ 5014], 00:22:14.706 | 70.00th=[ 5080], 80.00th=[ 5276], 90.00th=[ 5997], 95.00th=[ 7111], 00:22:14.706 | 99.00th=[ 8586], 99.50th=[ 8848], 99.90th=[ 9241], 99.95th=[ 9372], 00:22:14.706 | 99.99th=[ 9372] 00:22:14.706 bw ( KiB/s): min=11680, max=12624, per=24.07%, avg=12209.60, stdev=329.76, samples=10 00:22:14.706 iops : min= 1460, max= 1578, avg=1526.20, stdev=41.22, samples=10 00:22:14.706 lat (usec) : 1000=0.04% 00:22:14.706 lat (msec) : 2=0.35%, 4=2.26%, 10=97.34% 00:22:14.706 cpu : usr=95.50%, sys=4.04%, ctx=7, majf=0, minf=0 00:22:14.706 IO depths : 1=0.3%, 2=17.2%, 4=55.6%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:14.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.706 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.706 issued rwts: total=7639,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.706 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:14.706 filename1: (groupid=0, jobs=1): err= 0: pid=3225107: Fri Apr 26 14:26:55 2024 00:22:14.706 read: IOPS=1609, BW=12.6MiB/s (13.2MB/s)(62.9MiB/5002msec) 00:22:14.706 slat (nsec): min=9075, max=80718, avg=19118.36, stdev=6545.47 00:22:14.706 clat (usec): min=1252, max=9728, avg=4893.74, stdev=469.22 00:22:14.706 lat (usec): min=1274, max=9751, avg=4912.86, stdev=469.55 00:22:14.706 
clat percentiles (usec): 00:22:14.706 | 1.00th=[ 3490], 5.00th=[ 4228], 10.00th=[ 4490], 20.00th=[ 4817], 00:22:14.706 | 30.00th=[ 4883], 40.00th=[ 4883], 50.00th=[ 4948], 60.00th=[ 4948], 00:22:14.706 | 70.00th=[ 5014], 80.00th=[ 5014], 90.00th=[ 5080], 95.00th=[ 5211], 00:22:14.706 | 99.00th=[ 6652], 99.50th=[ 7570], 99.90th=[ 8160], 99.95th=[ 8586], 00:22:14.706 | 99.99th=[ 9765] 00:22:14.706 bw ( KiB/s): min=12440, max=13184, per=25.37%, avg=12871.20, stdev=218.68, samples=10 00:22:14.706 iops : min= 1555, max= 1648, avg=1608.90, stdev=27.34, samples=10 00:22:14.706 lat (msec) : 2=0.22%, 4=1.86%, 10=97.91% 00:22:14.706 cpu : usr=88.76%, sys=7.32%, ctx=188, majf=0, minf=0 00:22:14.706 IO depths : 1=0.8%, 2=24.0%, 4=50.6%, 8=24.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:14.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.706 complete : 0=0.0%, 4=90.3%, 8=9.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.706 issued rwts: total=8051,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.706 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:14.706 filename1: (groupid=0, jobs=1): err= 0: pid=3225108: Fri Apr 26 14:26:55 2024 00:22:14.706 read: IOPS=1611, BW=12.6MiB/s (13.2MB/s)(63.0MiB/5002msec) 00:22:14.706 slat (nsec): min=7502, max=62009, avg=11829.99, stdev=6004.45 00:22:14.706 clat (usec): min=1290, max=8889, avg=4924.21, stdev=521.05 00:22:14.706 lat (usec): min=1299, max=8919, avg=4936.04, stdev=521.22 00:22:14.706 clat percentiles (usec): 00:22:14.706 | 1.00th=[ 2999], 5.00th=[ 4228], 10.00th=[ 4424], 20.00th=[ 4686], 00:22:14.706 | 30.00th=[ 4883], 40.00th=[ 4948], 50.00th=[ 4948], 60.00th=[ 5014], 00:22:14.706 | 70.00th=[ 5080], 80.00th=[ 5080], 90.00th=[ 5276], 95.00th=[ 5538], 00:22:14.706 | 99.00th=[ 6718], 99.50th=[ 7177], 99.90th=[ 8225], 99.95th=[ 8356], 00:22:14.706 | 99.99th=[ 8848] 00:22:14.706 bw ( KiB/s): min=12608, max=13184, per=25.40%, avg=12885.70, stdev=200.16, samples=10 00:22:14.706 iops : min= 1576, max= 1648, avg=1610.70, stdev=25.03, samples=10 00:22:14.706 lat (msec) : 2=0.05%, 4=3.46%, 10=96.49% 00:22:14.706 cpu : usr=94.96%, sys=4.52%, ctx=8, majf=0, minf=9 00:22:14.706 IO depths : 1=0.5%, 2=14.7%, 4=58.0%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:14.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.706 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:14.706 issued rwts: total=8060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:14.706 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:14.706 00:22:14.706 Run status group 0 (all jobs): 00:22:14.706 READ: bw=49.5MiB/s (51.9MB/s), 11.9MiB/s-12.6MiB/s (12.5MB/s-13.2MB/s), io=248MiB (260MB), run=5002-5003msec 00:22:14.706 14:26:55 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:22:14.706 14:26:55 -- target/dif.sh@43 -- # local sub 00:22:14.706 14:26:55 -- target/dif.sh@45 -- # for sub in "$@" 00:22:14.706 14:26:55 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:14.706 14:26:55 -- target/dif.sh@36 -- # local sub_id=0 00:22:14.706 14:26:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:14.706 14:26:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.706 14:26:55 -- common/autotest_common.sh@10 -- # set +x 00:22:14.706 14:26:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.706 14:26:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:14.706 14:26:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.706 
14:26:55 -- common/autotest_common.sh@10 -- # set +x 00:22:14.706 14:26:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.706 14:26:55 -- target/dif.sh@45 -- # for sub in "$@" 00:22:14.706 14:26:55 -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:14.706 14:26:55 -- target/dif.sh@36 -- # local sub_id=1 00:22:14.706 14:26:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:14.706 14:26:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.706 14:26:55 -- common/autotest_common.sh@10 -- # set +x 00:22:14.706 14:26:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.706 14:26:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:14.706 14:26:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.706 14:26:55 -- common/autotest_common.sh@10 -- # set +x 00:22:14.706 14:26:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.706 00:22:14.706 real 0m24.052s 00:22:14.706 user 4m35.539s 00:22:14.706 sys 0m5.733s 00:22:14.706 14:26:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:14.706 14:26:55 -- common/autotest_common.sh@10 -- # set +x 00:22:14.706 ************************************ 00:22:14.706 END TEST fio_dif_rand_params 00:22:14.706 ************************************ 00:22:14.706 14:26:55 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:22:14.706 14:26:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:14.706 14:26:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:14.706 14:26:55 -- common/autotest_common.sh@10 -- # set +x 00:22:14.706 ************************************ 00:22:14.706 START TEST fio_dif_digest 00:22:14.706 ************************************ 00:22:14.706 14:26:55 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:22:14.706 14:26:55 -- target/dif.sh@123 -- # local NULL_DIF 00:22:14.706 14:26:55 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:22:14.706 14:26:55 -- target/dif.sh@125 -- # local hdgst ddgst 00:22:14.706 14:26:55 -- target/dif.sh@127 -- # NULL_DIF=3 00:22:14.706 14:26:55 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:22:14.706 14:26:55 -- target/dif.sh@127 -- # numjobs=3 00:22:14.706 14:26:55 -- target/dif.sh@127 -- # iodepth=3 00:22:14.706 14:26:55 -- target/dif.sh@127 -- # runtime=10 00:22:14.706 14:26:55 -- target/dif.sh@128 -- # hdgst=true 00:22:14.706 14:26:55 -- target/dif.sh@128 -- # ddgst=true 00:22:14.706 14:26:55 -- target/dif.sh@130 -- # create_subsystems 0 00:22:14.706 14:26:55 -- target/dif.sh@28 -- # local sub 00:22:14.706 14:26:55 -- target/dif.sh@30 -- # for sub in "$@" 00:22:14.706 14:26:55 -- target/dif.sh@31 -- # create_subsystem 0 00:22:14.706 14:26:55 -- target/dif.sh@18 -- # local sub_id=0 00:22:14.706 14:26:55 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:14.706 14:26:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.706 14:26:55 -- common/autotest_common.sh@10 -- # set +x 00:22:14.706 bdev_null0 00:22:14.706 14:26:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.706 14:26:55 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:14.706 14:26:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.706 14:26:55 -- common/autotest_common.sh@10 -- # set +x 00:22:14.706 14:26:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.706 14:26:55 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:14.706 14:26:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.706 14:26:55 -- common/autotest_common.sh@10 -- # set +x 00:22:14.706 14:26:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.706 14:26:55 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:14.706 14:26:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.706 14:26:55 -- common/autotest_common.sh@10 -- # set +x 00:22:14.706 [2024-04-26 14:26:55.885087] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:14.706 14:26:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.706 14:26:55 -- target/dif.sh@131 -- # fio /dev/fd/62 00:22:14.706 14:26:55 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:22:14.706 14:26:55 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:14.706 14:26:55 -- nvmf/common.sh@521 -- # config=() 00:22:14.706 14:26:55 -- nvmf/common.sh@521 -- # local subsystem config 00:22:14.707 14:26:55 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:14.707 14:26:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:14.707 14:26:55 -- target/dif.sh@82 -- # gen_fio_conf 00:22:14.707 14:26:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:14.707 { 00:22:14.707 "params": { 00:22:14.707 "name": "Nvme$subsystem", 00:22:14.707 "trtype": "$TEST_TRANSPORT", 00:22:14.707 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:14.707 "adrfam": "ipv4", 00:22:14.707 "trsvcid": "$NVMF_PORT", 00:22:14.707 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:14.707 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:14.707 "hdgst": ${hdgst:-false}, 00:22:14.707 "ddgst": ${ddgst:-false} 00:22:14.707 }, 00:22:14.707 "method": "bdev_nvme_attach_controller" 00:22:14.707 } 00:22:14.707 EOF 00:22:14.707 )") 00:22:14.707 14:26:55 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:14.707 14:26:55 -- target/dif.sh@54 -- # local file 00:22:14.707 14:26:55 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:14.707 14:26:55 -- target/dif.sh@56 -- # cat 00:22:14.707 14:26:55 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:14.707 14:26:55 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:14.707 14:26:55 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:14.707 14:26:55 -- common/autotest_common.sh@1327 -- # shift 00:22:14.707 14:26:55 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:14.707 14:26:55 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:14.707 14:26:55 -- nvmf/common.sh@543 -- # cat 00:22:14.707 14:26:55 -- target/dif.sh@72 -- # (( file = 1 )) 00:22:14.707 14:26:55 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:14.707 14:26:55 -- target/dif.sh@72 -- # (( file <= files )) 00:22:14.707 14:26:55 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:14.707 14:26:55 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:14.707 14:26:55 -- nvmf/common.sh@545 -- # jq . 
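The digest pass repeats the same provisioning with a single null bdev, this time DIF type 3. A sketch under the same scripts/rpc.py assumption:

  # one DIF type 3 null bdev behind one NVMe/TCP subsystem
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Note that the header and data digest switches are host-side settings: they show up as "hdgst": true and "ddgst": true in the bdev_nvme_attach_controller parameters printed just below, not as target RPC flags.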
00:22:14.707 14:26:55 -- nvmf/common.sh@546 -- # IFS=, 00:22:14.707 14:26:55 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:14.707 "params": { 00:22:14.707 "name": "Nvme0", 00:22:14.707 "trtype": "tcp", 00:22:14.707 "traddr": "10.0.0.2", 00:22:14.707 "adrfam": "ipv4", 00:22:14.707 "trsvcid": "4420", 00:22:14.707 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:14.707 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:14.707 "hdgst": true, 00:22:14.707 "ddgst": true 00:22:14.707 }, 00:22:14.707 "method": "bdev_nvme_attach_controller" 00:22:14.707 }' 00:22:14.707 14:26:55 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:14.707 14:26:55 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:14.707 14:26:55 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:14.707 14:26:55 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:22:14.707 14:26:55 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:14.707 14:26:55 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:14.707 14:26:55 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:14.707 14:26:55 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:14.707 14:26:55 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:22:14.707 14:26:55 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:14.707 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:14.707 ... 00:22:14.707 fio-3.35 00:22:14.707 Starting 3 threads 00:22:14.707 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.904 00:22:26.904 filename0: (groupid=0, jobs=1): err= 0: pid=3225773: Fri Apr 26 14:27:06 2024 00:22:26.904 read: IOPS=184, BW=23.1MiB/s (24.2MB/s)(232MiB/10049msec) 00:22:26.904 slat (nsec): min=6139, max=40427, avg=22033.63, stdev=4119.16 00:22:26.904 clat (usec): min=12344, max=52087, avg=16193.84, stdev=1678.18 00:22:26.904 lat (usec): min=12366, max=52110, avg=16215.88, stdev=1678.20 00:22:26.904 clat percentiles (usec): 00:22:26.904 | 1.00th=[13435], 5.00th=[14091], 10.00th=[14615], 20.00th=[15139], 00:22:26.904 | 30.00th=[15533], 40.00th=[15795], 50.00th=[16057], 60.00th=[16319], 00:22:26.904 | 70.00th=[16712], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:22:26.904 | 99.00th=[19268], 99.50th=[19792], 99.90th=[50070], 99.95th=[52167], 00:22:26.904 | 99.99th=[52167] 00:22:26.904 bw ( KiB/s): min=22272, max=25088, per=33.23%, avg=23718.40, stdev=948.00, samples=20 00:22:26.904 iops : min= 174, max= 196, avg=185.30, stdev= 7.41, samples=20 00:22:26.904 lat (msec) : 20=99.62%, 50=0.27%, 100=0.11% 00:22:26.904 cpu : usr=95.62%, sys=3.90%, ctx=24, majf=0, minf=145 00:22:26.904 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:26.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.904 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:26.904 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:26.904 filename0: (groupid=0, jobs=1): err= 0: pid=3225774: Fri Apr 26 14:27:06 2024 00:22:26.904 read: IOPS=183, BW=22.9MiB/s (24.0MB/s)(230MiB/10048msec) 00:22:26.904 slat (nsec): min=5486, max=38471, avg=15188.67, stdev=3830.58 00:22:26.904 clat (usec): 
min=12943, max=55053, avg=16311.95, stdev=1572.64 00:22:26.904 lat (usec): min=12955, max=55065, avg=16327.14, stdev=1572.49 00:22:26.904 clat percentiles (usec): 00:22:26.904 | 1.00th=[13829], 5.00th=[14615], 10.00th=[15008], 20.00th=[15533], 00:22:26.904 | 30.00th=[15795], 40.00th=[16057], 50.00th=[16188], 60.00th=[16450], 00:22:26.904 | 70.00th=[16712], 80.00th=[17171], 90.00th=[17433], 95.00th=[17957], 00:22:26.904 | 99.00th=[18744], 99.50th=[19006], 99.90th=[50070], 99.95th=[55313], 00:22:26.904 | 99.99th=[55313] 00:22:26.904 bw ( KiB/s): min=22528, max=24320, per=33.02%, avg=23567.10, stdev=527.28, samples=20 00:22:26.904 iops : min= 176, max= 190, avg=184.10, stdev= 4.13, samples=20 00:22:26.904 lat (msec) : 20=99.73%, 50=0.22%, 100=0.05% 00:22:26.904 cpu : usr=94.98%, sys=4.60%, ctx=22, majf=0, minf=119 00:22:26.904 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:26.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.904 issued rwts: total=1843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:26.904 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:26.904 filename0: (groupid=0, jobs=1): err= 0: pid=3225775: Fri Apr 26 14:27:06 2024 00:22:26.904 read: IOPS=189, BW=23.7MiB/s (24.8MB/s)(238MiB/10049msec) 00:22:26.904 slat (nsec): min=5708, max=57503, avg=15094.68, stdev=3755.72 00:22:26.904 clat (usec): min=12058, max=52549, avg=15784.16, stdev=1578.06 00:22:26.904 lat (usec): min=12071, max=52569, avg=15799.26, stdev=1578.10 00:22:26.904 clat percentiles (usec): 00:22:26.904 | 1.00th=[13304], 5.00th=[13960], 10.00th=[14484], 20.00th=[14877], 00:22:26.904 | 30.00th=[15139], 40.00th=[15533], 50.00th=[15795], 60.00th=[15926], 00:22:26.904 | 70.00th=[16188], 80.00th=[16581], 90.00th=[17171], 95.00th=[17433], 00:22:26.904 | 99.00th=[18744], 99.50th=[19006], 99.90th=[50594], 99.95th=[52691], 00:22:26.904 | 99.99th=[52691] 00:22:26.904 bw ( KiB/s): min=23552, max=25344, per=34.11%, avg=24345.60, stdev=609.78, samples=20 00:22:26.904 iops : min= 184, max= 198, avg=190.20, stdev= 4.76, samples=20 00:22:26.904 lat (msec) : 20=99.79%, 50=0.10%, 100=0.10% 00:22:26.904 cpu : usr=94.74%, sys=4.87%, ctx=25, majf=0, minf=151 00:22:26.904 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:26.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.904 issued rwts: total=1905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:26.904 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:26.904 00:22:26.904 Run status group 0 (all jobs): 00:22:26.904 READ: bw=69.7MiB/s (73.1MB/s), 22.9MiB/s-23.7MiB/s (24.0MB/s-24.8MB/s), io=701MiB (735MB), run=10048-10049msec 00:22:26.904 14:27:06 -- target/dif.sh@132 -- # destroy_subsystems 0 00:22:26.904 14:27:06 -- target/dif.sh@43 -- # local sub 00:22:26.904 14:27:06 -- target/dif.sh@45 -- # for sub in "$@" 00:22:26.904 14:27:06 -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:26.904 14:27:06 -- target/dif.sh@36 -- # local sub_id=0 00:22:26.904 14:27:06 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:26.904 14:27:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.904 14:27:06 -- common/autotest_common.sh@10 -- # set +x 00:22:26.904 14:27:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.904 14:27:06 -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:26.904 14:27:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.904 14:27:06 -- common/autotest_common.sh@10 -- # set +x 00:22:26.904 14:27:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.904 00:22:26.904 real 0m11.039s 00:22:26.904 user 0m29.517s 00:22:26.904 sys 0m1.580s 00:22:26.904 14:27:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:26.904 14:27:06 -- common/autotest_common.sh@10 -- # set +x 00:22:26.904 ************************************ 00:22:26.904 END TEST fio_dif_digest 00:22:26.904 ************************************ 00:22:26.904 14:27:06 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:22:26.904 14:27:06 -- target/dif.sh@147 -- # nvmftestfini 00:22:26.904 14:27:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:26.904 14:27:06 -- nvmf/common.sh@117 -- # sync 00:22:26.904 14:27:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:26.904 14:27:06 -- nvmf/common.sh@120 -- # set +e 00:22:26.904 14:27:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:26.904 14:27:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:26.904 rmmod nvme_tcp 00:22:26.904 rmmod nvme_fabrics 00:22:26.904 rmmod nvme_keyring 00:22:26.904 14:27:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:26.904 14:27:06 -- nvmf/common.sh@124 -- # set -e 00:22:26.904 14:27:06 -- nvmf/common.sh@125 -- # return 0 00:22:26.904 14:27:06 -- nvmf/common.sh@478 -- # '[' -n 3221002 ']' 00:22:26.904 14:27:06 -- nvmf/common.sh@479 -- # killprocess 3221002 00:22:26.904 14:27:06 -- common/autotest_common.sh@936 -- # '[' -z 3221002 ']' 00:22:26.904 14:27:06 -- common/autotest_common.sh@940 -- # kill -0 3221002 00:22:26.904 14:27:06 -- common/autotest_common.sh@941 -- # uname 00:22:26.904 14:27:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:26.904 14:27:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3221002 00:22:26.904 14:27:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:26.904 14:27:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:26.904 14:27:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3221002' 00:22:26.904 killing process with pid 3221002 00:22:26.904 14:27:07 -- common/autotest_common.sh@955 -- # kill 3221002 00:22:26.904 14:27:07 -- common/autotest_common.sh@960 -- # wait 3221002 00:22:26.904 14:27:07 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:22:26.904 14:27:07 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:26.904 Waiting for block devices as requested 00:22:26.904 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:22:26.904 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:22:26.904 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:22:26.904 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:22:27.164 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:22:27.164 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:22:27.164 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:22:27.164 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:22:27.421 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:22:27.421 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:22:27.421 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:22:27.421 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:22:27.680 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:22:27.680 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:22:27.680 0000:80:04.2 (8086 3c22): vfio-pci -> 
ioatdma 00:22:27.680 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:22:27.940 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:22:27.940 14:27:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:27.940 14:27:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:27.940 14:27:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:27.940 14:27:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:27.940 14:27:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.940 14:27:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:27.940 14:27:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.938 14:27:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:29.938 00:22:29.938 real 1m5.927s 00:22:29.938 user 6m31.836s 00:22:29.938 sys 0m16.153s 00:22:29.938 14:27:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:29.938 14:27:11 -- common/autotest_common.sh@10 -- # set +x 00:22:29.938 ************************************ 00:22:29.938 END TEST nvmf_dif 00:22:29.938 ************************************ 00:22:29.938 14:27:11 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:29.938 14:27:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:29.938 14:27:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:29.938 14:27:11 -- common/autotest_common.sh@10 -- # set +x 00:22:29.938 ************************************ 00:22:29.938 START TEST nvmf_abort_qd_sizes 00:22:29.938 ************************************ 00:22:29.938 14:27:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:22:30.197 * Looking for test storage... 
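
The banner/timing pairs in this log ("START TEST nvmf_dif", the real/user/sys summary, "END TEST nvmf_dif") come from autotest's run_test wrapper. A rough sketch of that helper, with internals simplified (the real autotest_common.sh also manages xtrace and argument checks; only the banners and the time-style summary are actually visible in this log). The test-storage probe output resumes right after this note.

    run_test() {
        # hypothetical reconstruction of the wrapper, not the shipped source
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
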
00:22:30.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:30.197 14:27:11 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:30.197 14:27:11 -- nvmf/common.sh@7 -- # uname -s 00:22:30.197 14:27:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:30.197 14:27:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:30.197 14:27:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:30.197 14:27:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:30.197 14:27:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:30.197 14:27:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:30.197 14:27:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:30.197 14:27:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:30.197 14:27:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:30.197 14:27:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:30.197 14:27:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:30.197 14:27:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:22:30.197 14:27:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:30.197 14:27:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:30.197 14:27:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:30.197 14:27:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:30.197 14:27:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:30.197 14:27:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:30.197 14:27:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:30.197 14:27:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:30.197 14:27:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.197 14:27:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.197 14:27:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.197 14:27:11 -- paths/export.sh@5 -- # export PATH 00:22:30.197 14:27:11 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.197 14:27:11 -- nvmf/common.sh@47 -- # : 0 00:22:30.197 14:27:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:30.197 14:27:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:30.197 14:27:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:30.197 14:27:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:30.197 14:27:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:30.197 14:27:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:30.197 14:27:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:30.197 14:27:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:30.197 14:27:11 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:22:30.197 14:27:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:30.197 14:27:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:30.197 14:27:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:30.197 14:27:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:30.197 14:27:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:30.197 14:27:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.197 14:27:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:30.197 14:27:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.197 14:27:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:30.197 14:27:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:30.197 14:27:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:30.197 14:27:11 -- common/autotest_common.sh@10 -- # set +x 00:22:31.574 14:27:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:31.574 14:27:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:31.574 14:27:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:31.574 14:27:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:31.574 14:27:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:31.574 14:27:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:31.574 14:27:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:31.574 14:27:13 -- nvmf/common.sh@295 -- # net_devs=() 00:22:31.574 14:27:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:31.574 14:27:13 -- nvmf/common.sh@296 -- # e810=() 00:22:31.574 14:27:13 -- nvmf/common.sh@296 -- # local -ga e810 00:22:31.574 14:27:13 -- nvmf/common.sh@297 -- # x722=() 00:22:31.574 14:27:13 -- nvmf/common.sh@297 -- # local -ga x722 00:22:31.574 14:27:13 -- nvmf/common.sh@298 -- # mlx=() 00:22:31.574 14:27:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:31.574 14:27:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.574 14:27:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.574 14:27:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.574 14:27:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.575 14:27:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.575 14:27:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.575 14:27:13 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.575 14:27:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.575 14:27:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.575 14:27:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.575 14:27:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.575 14:27:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:31.575 14:27:13 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:31.575 14:27:13 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:31.575 14:27:13 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:31.575 14:27:13 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:31.575 14:27:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:31.575 14:27:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:31.575 14:27:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:22:31.575 Found 0000:08:00.0 (0x8086 - 0x159b) 00:22:31.575 14:27:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:31.575 14:27:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:31.575 14:27:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.575 14:27:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.575 14:27:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:31.575 14:27:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:31.575 14:27:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:22:31.575 Found 0000:08:00.1 (0x8086 - 0x159b) 00:22:31.575 14:27:13 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:31.575 14:27:13 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:31.575 14:27:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.575 14:27:13 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.575 14:27:13 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:31.575 14:27:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:31.575 14:27:13 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:31.575 14:27:13 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:31.575 14:27:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:31.575 14:27:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.575 14:27:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:31.575 14:27:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.575 14:27:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:22:31.575 Found net devices under 0000:08:00.0: cvl_0_0 00:22:31.575 14:27:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.575 14:27:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:31.575 14:27:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.575 14:27:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:31.575 14:27:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.575 14:27:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:22:31.575 Found net devices under 0000:08:00.1: cvl_0_1 00:22:31.575 14:27:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.575 14:27:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:31.575 14:27:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:31.575 14:27:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:31.575 14:27:13 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:31.575 14:27:13 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:31.575 14:27:13 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.575 14:27:13 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.575 14:27:13 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:31.575 14:27:13 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:31.575 14:27:13 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:31.575 14:27:13 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:31.575 14:27:13 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:31.575 14:27:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:31.575 14:27:13 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.575 14:27:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:31.575 14:27:13 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:31.575 14:27:13 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:31.575 14:27:13 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:31.834 14:27:13 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:31.834 14:27:13 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:31.834 14:27:13 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:31.834 14:27:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:31.834 14:27:13 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:31.834 14:27:13 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:31.834 14:27:13 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:31.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:31.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:22:31.834 00:22:31.834 --- 10.0.0.2 ping statistics --- 00:22:31.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.834 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:22:31.834 14:27:13 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:31.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:31.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:22:31.834 00:22:31.834 --- 10.0.0.1 ping statistics --- 00:22:31.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.834 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:22:31.834 14:27:13 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.834 14:27:13 -- nvmf/common.sh@411 -- # return 0 00:22:31.834 14:27:13 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:22:31.834 14:27:13 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:32.770 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:22:32.770 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:22:32.770 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:22:32.770 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:22:32.770 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:22:32.770 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:22:32.770 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:22:32.770 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:22:32.770 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:22:32.770 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:22:32.770 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:22:32.770 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:22:32.770 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:22:32.770 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:22:32.770 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:22:32.770 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:22:33.708 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:22:33.708 14:27:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.708 14:27:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:33.708 14:27:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:33.708 14:27:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.708 14:27:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:33.708 14:27:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:33.708 14:27:15 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:22:33.708 14:27:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:33.708 14:27:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:33.708 14:27:15 -- common/autotest_common.sh@10 -- # set +x 00:22:33.708 14:27:15 -- nvmf/common.sh@470 -- # nvmfpid=3230138 00:22:33.708 14:27:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:22:33.708 14:27:15 -- nvmf/common.sh@471 -- # waitforlisten 3230138 00:22:33.708 14:27:15 -- common/autotest_common.sh@817 -- # '[' -z 3230138 ']' 00:22:33.708 14:27:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.708 14:27:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:33.708 14:27:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.708 14:27:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:33.708 14:27:15 -- common/autotest_common.sh@10 -- # set +x 00:22:33.966 [2024-04-26 14:27:15.319076] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
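
The nvmf_tgt instance starting here runs inside the network namespace that nvmf_tcp_init assembled above: one E810 port (cvl_0_0, 10.0.0.2) is moved into a private namespace to act as the target side, while its peer port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator. Condensed from the traced commands (interface names are simply what this rig assigned; the DPDK startup output continues below):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target, private netns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # the target app is then pinned to that namespace:
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf

The bidirectional pings above (10.0.0.2 from the root namespace, 10.0.0.1 from inside the target namespace) verify that wiring before any NVMe/TCP traffic is attempted.
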
00:22:33.966 [2024-04-26 14:27:15.319162] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.966 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.966 [2024-04-26 14:27:15.383590] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:33.966 [2024-04-26 14:27:15.499996] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.966 [2024-04-26 14:27:15.500054] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.966 [2024-04-26 14:27:15.500069] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.966 [2024-04-26 14:27:15.500082] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.966 [2024-04-26 14:27:15.500094] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:33.966 [2024-04-26 14:27:15.500178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.966 [2024-04-26 14:27:15.500235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.966 [2024-04-26 14:27:15.500289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:33.966 [2024-04-26 14:27:15.500292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.901 14:27:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:34.901 14:27:16 -- common/autotest_common.sh@850 -- # return 0 00:22:34.901 14:27:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:34.901 14:27:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:34.901 14:27:16 -- common/autotest_common.sh@10 -- # set +x 00:22:34.901 14:27:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.901 14:27:16 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:22:34.901 14:27:16 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:22:34.901 14:27:16 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:22:34.902 14:27:16 -- scripts/common.sh@309 -- # local bdf bdfs 00:22:34.902 14:27:16 -- scripts/common.sh@310 -- # local nvmes 00:22:34.902 14:27:16 -- scripts/common.sh@312 -- # [[ -n 0000:84:00.0 ]] 00:22:34.902 14:27:16 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:22:34.902 14:27:16 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:22:34.902 14:27:16 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:84:00.0 ]] 00:22:34.902 14:27:16 -- scripts/common.sh@320 -- # uname -s 00:22:34.902 14:27:16 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:22:34.902 14:27:16 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:22:34.902 14:27:16 -- scripts/common.sh@325 -- # (( 1 )) 00:22:34.902 14:27:16 -- scripts/common.sh@326 -- # printf '%s\n' 0000:84:00.0 00:22:34.902 14:27:16 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:22:34.902 14:27:16 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:84:00.0 00:22:34.902 14:27:16 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:22:34.902 14:27:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:34.902 14:27:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:34.902 14:27:16 -- 
common/autotest_common.sh@10 -- # set +x 00:22:34.902 ************************************ 00:22:34.902 START TEST spdk_target_abort 00:22:34.902 ************************************ 00:22:34.902 14:27:16 -- common/autotest_common.sh@1111 -- # spdk_target 00:22:34.902 14:27:16 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:22:34.902 14:27:16 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:84:00.0 -b spdk_target 00:22:34.902 14:27:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.902 14:27:16 -- common/autotest_common.sh@10 -- # set +x 00:22:38.185 spdk_targetn1 00:22:38.185 14:27:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.185 14:27:19 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:38.185 14:27:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.185 14:27:19 -- common/autotest_common.sh@10 -- # set +x 00:22:38.185 [2024-04-26 14:27:19.252568] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.185 14:27:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.185 14:27:19 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:22:38.185 14:27:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.185 14:27:19 -- common/autotest_common.sh@10 -- # set +x 00:22:38.185 14:27:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.185 14:27:19 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:22:38.185 14:27:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.185 14:27:19 -- common/autotest_common.sh@10 -- # set +x 00:22:38.185 14:27:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.185 14:27:19 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:22:38.185 14:27:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.185 14:27:19 -- common/autotest_common.sh@10 -- # set +x 00:22:38.185 [2024-04-26 14:27:19.284810] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.185 14:27:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.185 14:27:19 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:22:38.185 14:27:19 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:22:38.185 14:27:19 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:22:38.185 14:27:19 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:22:38.185 14:27:19 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:22:38.185 14:27:19 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:22:38.185 14:27:19 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:22:38.185 14:27:19 -- target/abort_qd_sizes.sh@24 -- # local target r 00:22:38.185 14:27:19 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:22:38.185 14:27:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:38.185 14:27:19 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:22:38.185 14:27:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:38.185 14:27:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:22:38.185 14:27:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 
00:22:38.185 14:27:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:22:38.185 14:27:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:38.185 14:27:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:38.185 14:27:19 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:38.185 14:27:19 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:38.185 14:27:19 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:38.185 14:27:19 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:38.185 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.468 Initializing NVMe Controllers 00:22:41.469 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:22:41.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:41.469 Initialization complete. Launching workers. 00:22:41.469 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11190, failed: 0 00:22:41.469 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1220, failed to submit 9970 00:22:41.469 success 719, unsuccess 501, failed 0 00:22:41.469 14:27:22 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:41.469 14:27:22 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:41.469 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.752 Initializing NVMe Controllers 00:22:44.752 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:22:44.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:44.752 Initialization complete. Launching workers. 00:22:44.752 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8469, failed: 0 00:22:44.752 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1278, failed to submit 7191 00:22:44.752 success 315, unsuccess 963, failed 0 00:22:44.752 14:27:25 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:44.752 14:27:25 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:44.752 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.035 Initializing NVMe Controllers 00:22:48.035 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:22:48.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:48.035 Initialization complete. Launching workers. 
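
The three result blocks in this suite come from a single sweep: rabort builds the connection string once, then launches the abort example once per queue depth from qds=(4 24 64) with the workload flags held constant, so only -q differs between runs. Condensed from the trace (the q=64 pass's counters continue just below):

    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
            -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done

In each summary, "success" counts aborts the controller acknowledged and "unsuccess" roughly counts aborts whose target command had already completed; NVMe abort is best-effort, so both outcomes are acceptable to the test.
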
00:22:48.035 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 28615, failed: 0 00:22:48.035 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2605, failed to submit 26010 00:22:48.035 success 302, unsuccess 2303, failed 0 00:22:48.035 14:27:28 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:22:48.035 14:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.035 14:27:28 -- common/autotest_common.sh@10 -- # set +x 00:22:48.035 14:27:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.035 14:27:28 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:22:48.035 14:27:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.035 14:27:28 -- common/autotest_common.sh@10 -- # set +x 00:22:48.970 14:27:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.970 14:27:30 -- target/abort_qd_sizes.sh@61 -- # killprocess 3230138 00:22:48.970 14:27:30 -- common/autotest_common.sh@936 -- # '[' -z 3230138 ']' 00:22:48.970 14:27:30 -- common/autotest_common.sh@940 -- # kill -0 3230138 00:22:48.970 14:27:30 -- common/autotest_common.sh@941 -- # uname 00:22:48.970 14:27:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:48.970 14:27:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3230138 00:22:48.970 14:27:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:48.970 14:27:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:48.970 14:27:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3230138' 00:22:48.970 killing process with pid 3230138 00:22:48.970 14:27:30 -- common/autotest_common.sh@955 -- # kill 3230138 00:22:48.970 14:27:30 -- common/autotest_common.sh@960 -- # wait 3230138 00:22:49.230 00:22:49.230 real 0m14.113s 00:22:49.230 user 0m56.006s 00:22:49.230 sys 0m2.631s 00:22:49.230 14:27:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:49.230 14:27:30 -- common/autotest_common.sh@10 -- # set +x 00:22:49.230 ************************************ 00:22:49.230 END TEST spdk_target_abort 00:22:49.230 ************************************ 00:22:49.230 14:27:30 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:22:49.230 14:27:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:49.230 14:27:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:49.230 14:27:30 -- common/autotest_common.sh@10 -- # set +x 00:22:49.230 ************************************ 00:22:49.230 START TEST kernel_target_abort 00:22:49.230 ************************************ 00:22:49.230 14:27:30 -- common/autotest_common.sh@1111 -- # kernel_target 00:22:49.230 14:27:30 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:22:49.230 14:27:30 -- nvmf/common.sh@717 -- # local ip 00:22:49.230 14:27:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:49.230 14:27:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:49.230 14:27:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:49.230 14:27:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:49.230 14:27:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:49.230 14:27:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:49.230 14:27:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:49.230 14:27:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:49.230 14:27:30 -- nvmf/common.sh@731 -- # 
echo 10.0.0.1 00:22:49.230 14:27:30 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:49.230 14:27:30 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:49.230 14:27:30 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:22:49.230 14:27:30 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:49.230 14:27:30 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:49.230 14:27:30 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:49.230 14:27:30 -- nvmf/common.sh@628 -- # local block nvme 00:22:49.230 14:27:30 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:22:49.230 14:27:30 -- nvmf/common.sh@631 -- # modprobe nvmet 00:22:49.230 14:27:30 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:49.230 14:27:30 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:50.168 Waiting for block devices as requested 00:22:50.168 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:22:50.168 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:22:50.428 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:22:50.428 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:22:50.428 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:22:50.428 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:22:50.428 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:22:50.688 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:22:50.688 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:22:50.688 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:22:50.946 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:22:50.946 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:22:50.946 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:22:50.946 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:22:51.204 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:22:51.204 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:22:51.205 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:22:51.205 14:27:32 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:22:51.205 14:27:32 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:51.205 14:27:32 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:22:51.205 14:27:32 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:22:51.205 14:27:32 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:51.205 14:27:32 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:51.205 14:27:32 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:22:51.205 14:27:32 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:51.205 14:27:32 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:51.464 No valid GPT data, bailing 00:22:51.464 14:27:32 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:51.464 14:27:32 -- scripts/common.sh@391 -- # pt= 00:22:51.464 14:27:32 -- scripts/common.sh@392 -- # return 1 00:22:51.464 14:27:32 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:22:51.464 14:27:32 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:22:51.464 14:27:32 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:51.464 14:27:32 -- nvmf/common.sh@648 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:51.464 14:27:32 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:51.464 14:27:32 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:51.464 14:27:32 -- nvmf/common.sh@656 -- # echo 1 00:22:51.464 14:27:32 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:22:51.464 14:27:32 -- nvmf/common.sh@658 -- # echo 1 00:22:51.464 14:27:32 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:22:51.464 14:27:32 -- nvmf/common.sh@661 -- # echo tcp 00:22:51.464 14:27:32 -- nvmf/common.sh@662 -- # echo 4420 00:22:51.464 14:27:32 -- nvmf/common.sh@663 -- # echo ipv4 00:22:51.464 14:27:32 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:51.464 14:27:32 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:22:51.464 00:22:51.464 Discovery Log Number of Records 2, Generation counter 2 00:22:51.464 =====Discovery Log Entry 0====== 00:22:51.464 trtype: tcp 00:22:51.464 adrfam: ipv4 00:22:51.464 subtype: current discovery subsystem 00:22:51.464 treq: not specified, sq flow control disable supported 00:22:51.464 portid: 1 00:22:51.464 trsvcid: 4420 00:22:51.465 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:51.465 traddr: 10.0.0.1 00:22:51.465 eflags: none 00:22:51.465 sectype: none 00:22:51.465 =====Discovery Log Entry 1====== 00:22:51.465 trtype: tcp 00:22:51.465 adrfam: ipv4 00:22:51.465 subtype: nvme subsystem 00:22:51.465 treq: not specified, sq flow control disable supported 00:22:51.465 portid: 1 00:22:51.465 trsvcid: 4420 00:22:51.465 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:51.465 traddr: 10.0.0.1 00:22:51.465 eflags: none 00:22:51.465 sectype: none 00:22:51.465 14:27:32 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:22:51.465 14:27:32 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:22:51.465 14:27:32 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:22:51.465 14:27:32 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:22:51.465 14:27:32 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:22:51.465 14:27:32 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:22:51.465 14:27:32 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:22:51.465 14:27:32 -- target/abort_qd_sizes.sh@24 -- # local target r 00:22:51.465 14:27:32 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:22:51.465 14:27:32 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:51.465 14:27:32 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:22:51.465 14:27:32 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:51.465 14:27:32 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:22:51.465 14:27:32 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:51.465 14:27:32 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:22:51.465 14:27:32 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:51.465 14:27:32 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:22:51.465 14:27:32 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr 
trsvcid subnqn 00:22:51.465 14:27:32 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:51.465 14:27:32 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:51.465 14:27:32 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:51.465 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.746 Initializing NVMe Controllers 00:22:54.746 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:54.746 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:54.746 Initialization complete. Launching workers. 00:22:54.746 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 40863, failed: 0 00:22:54.746 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 40863, failed to submit 0 00:22:54.746 success 0, unsuccess 40863, failed 0 00:22:54.746 14:27:35 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:54.746 14:27:35 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:54.746 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.029 Initializing NVMe Controllers 00:22:58.029 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:58.029 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:58.029 Initialization complete. Launching workers. 00:22:58.029 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 74732, failed: 0 00:22:58.029 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18842, failed to submit 55890 00:22:58.029 success 0, unsuccess 18842, failed 0 00:22:58.029 14:27:38 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:58.029 14:27:38 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:58.029 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.562 Initializing NVMe Controllers 00:23:00.562 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:00.562 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:00.562 Initialization complete. Launching workers. 
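
This pass repeats the same queue-depth sweep against the in-kernel nvmet target that configure_kernel_target assembled above, entirely through configfs. xtrace does not print redirection targets, so the destination files below follow the standard nvmet configfs layout and are assumptions rather than values read from the log (the q=64 counters continue just below):

    cfg=/sys/kernel/config/nvmet
    subsys=$cfg/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$cfg/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"           # destination assumed
    echo 1             > "$subsys/attr_allow_any_host"                     # destination assumed
    echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"                # destination assumed
    echo 1             > "$subsys/namespaces/1/enable"                     # destination assumed
    echo 10.0.0.1      > "$cfg/ports/1/addr_traddr"
    echo tcp           > "$cfg/ports/1/addr_trtype"
    echo 4420          > "$cfg/ports/1/addr_trsvcid"
    echo ipv4          > "$cfg/ports/1/addr_adrfam"
    ln -s "$subsys" "$cfg/ports/1/subsystems/"

Note the asymmetry with the SPDK-target runs above: against the kernel target every abort comes back unsuccessful (success 0), plausibly because the commands complete before the aborts land; since abort handling is best-effort and implementation-specific, the suite treats this as a pass, not a failure.
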
00:23:00.562 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72313, failed: 0 00:23:00.562 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18046, failed to submit 54267 00:23:00.562 success 0, unsuccess 18046, failed 0 00:23:00.562 14:27:42 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:23:00.562 14:27:42 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:00.562 14:27:42 -- nvmf/common.sh@675 -- # echo 0 00:23:00.562 14:27:42 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:00.562 14:27:42 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:00.562 14:27:42 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:00.562 14:27:42 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:00.562 14:27:42 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:23:00.562 14:27:42 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:23:00.562 14:27:42 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:01.499 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:23:01.499 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:23:01.499 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:23:01.499 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:23:01.499 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:23:01.499 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:23:01.499 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:23:01.499 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:23:01.499 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:23:01.499 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:23:01.499 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:23:01.499 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:23:01.499 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:23:01.499 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:23:01.499 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:23:01.499 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:23:02.437 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:23:02.696 00:23:02.696 real 0m13.372s 00:23:02.696 user 0m6.156s 00:23:02.696 sys 0m2.727s 00:23:02.696 14:27:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:02.696 14:27:44 -- common/autotest_common.sh@10 -- # set +x 00:23:02.696 ************************************ 00:23:02.696 END TEST kernel_target_abort 00:23:02.696 ************************************ 00:23:02.696 14:27:44 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:02.696 14:27:44 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:23:02.696 14:27:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:02.696 14:27:44 -- nvmf/common.sh@117 -- # sync 00:23:02.696 14:27:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:02.696 14:27:44 -- nvmf/common.sh@120 -- # set +e 00:23:02.696 14:27:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:02.696 14:27:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:02.696 rmmod nvme_tcp 00:23:02.696 rmmod nvme_fabrics 00:23:02.696 rmmod nvme_keyring 00:23:02.696 14:27:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:02.696 14:27:44 -- nvmf/common.sh@124 -- # set -e 00:23:02.696 14:27:44 -- nvmf/common.sh@125 -- # return 0 00:23:02.696 14:27:44 -- nvmf/common.sh@478 -- # '[' -n 3230138 ']' 
00:23:02.696 14:27:44 -- nvmf/common.sh@479 -- # killprocess 3230138 00:23:02.696 14:27:44 -- common/autotest_common.sh@936 -- # '[' -z 3230138 ']' 00:23:02.696 14:27:44 -- common/autotest_common.sh@940 -- # kill -0 3230138 00:23:02.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (3230138) - No such process 00:23:02.696 14:27:44 -- common/autotest_common.sh@963 -- # echo 'Process with pid 3230138 is not found' 00:23:02.696 Process with pid 3230138 is not found 00:23:02.696 14:27:44 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:23:02.696 14:27:44 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:03.630 Waiting for block devices as requested 00:23:03.630 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:23:03.630 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:23:03.630 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:23:03.630 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:23:03.890 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:23:03.890 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:23:03.890 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:23:04.149 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:23:04.149 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:23:04.149 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:23:04.149 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:23:04.409 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:23:04.409 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:23:04.409 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:23:04.409 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:23:04.667 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:23:04.667 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:23:04.667 14:27:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:04.667 14:27:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:04.667 14:27:46 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:04.667 14:27:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:04.667 14:27:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.667 14:27:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:04.667 14:27:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.576 14:27:48 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:06.576 00:23:06.576 real 0m36.624s 00:23:06.576 user 1m4.146s 00:23:06.576 sys 0m8.247s 00:23:06.576 14:27:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:06.576 14:27:48 -- common/autotest_common.sh@10 -- # set +x 00:23:06.576 ************************************ 00:23:06.576 END TEST nvmf_abort_qd_sizes 00:23:06.576 ************************************ 00:23:06.835 14:27:48 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:23:06.835 14:27:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:06.835 14:27:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:06.835 14:27:48 -- common/autotest_common.sh@10 -- # set +x 00:23:06.835 ************************************ 00:23:06.835 START TEST keyring_file 00:23:06.835 ************************************ 00:23:06.835 14:27:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:23:06.835 * Looking for test storage... 
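
Before keyring_file starts, nvmftestfini has just unloaded the initiator stack; the rmmod lines above come from a retry loop in nvmf/common.sh, schematically the following (the 1..20 retry, the modprobe calls, and the set +e / set -e bracketing are traced; the back-off between attempts is an assumption). The keyring suite's storage probe resumes below.

    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp              # pulls out nvme_tcp and its dependents
        modprobe -v -r nvme-fabrics && break
        sleep 1                              # assumed back-off, not visible in the trace
    done
    set -e
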
00:23:06.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:23:06.835 14:27:48 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:23:06.835 14:27:48 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:06.835 14:27:48 -- nvmf/common.sh@7 -- # uname -s 00:23:06.835 14:27:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.835 14:27:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.835 14:27:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.835 14:27:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.835 14:27:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:06.835 14:27:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:06.835 14:27:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.835 14:27:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:06.835 14:27:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.835 14:27:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:06.835 14:27:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:06.835 14:27:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:23:06.835 14:27:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.835 14:27:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:06.835 14:27:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:06.835 14:27:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:06.835 14:27:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:06.835 14:27:48 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.835 14:27:48 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.835 14:27:48 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.835 14:27:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.835 14:27:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.835 14:27:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.835 14:27:48 -- paths/export.sh@5 -- # export PATH 00:23:06.835 14:27:48 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.835 14:27:48 -- nvmf/common.sh@47 -- # : 0 00:23:06.835 14:27:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:06.835 14:27:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:06.835 14:27:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:06.835 14:27:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.835 14:27:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.835 14:27:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:06.835 14:27:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:06.835 14:27:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:06.835 14:27:48 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:06.835 14:27:48 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:06.835 14:27:48 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:06.835 14:27:48 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:23:06.835 14:27:48 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:23:06.835 14:27:48 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:23:06.835 14:27:48 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:06.835 14:27:48 -- keyring/common.sh@15 -- # local name key digest path 00:23:06.835 14:27:48 -- keyring/common.sh@17 -- # name=key0 00:23:06.835 14:27:48 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:06.835 14:27:48 -- keyring/common.sh@17 -- # digest=0 00:23:06.835 14:27:48 -- keyring/common.sh@18 -- # mktemp 00:23:06.835 14:27:48 -- keyring/common.sh@18 -- # path=/tmp/tmp.dXGeRL8Yth 00:23:06.835 14:27:48 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:06.835 14:27:48 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:06.835 14:27:48 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:06.835 14:27:48 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:23:06.835 14:27:48 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:23:06.835 14:27:48 -- nvmf/common.sh@693 -- # digest=0 00:23:06.835 14:27:48 -- nvmf/common.sh@694 -- # python - 00:23:06.835 14:27:48 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dXGeRL8Yth 00:23:06.835 14:27:48 -- keyring/common.sh@23 -- # echo /tmp/tmp.dXGeRL8Yth 00:23:06.835 14:27:48 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.dXGeRL8Yth 00:23:06.835 14:27:48 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:23:06.835 14:27:48 -- keyring/common.sh@15 -- # local name key digest path 00:23:06.835 14:27:48 -- keyring/common.sh@17 -- # name=key1 00:23:06.835 14:27:48 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:06.835 14:27:48 -- keyring/common.sh@17 -- # digest=0 00:23:06.835 14:27:48 -- keyring/common.sh@18 -- # mktemp 00:23:06.835 14:27:48 -- keyring/common.sh@18 -- # path=/tmp/tmp.pX61KJackr 00:23:06.835 14:27:48 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:23:06.835 14:27:48 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:23:06.835 14:27:48 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:06.835 14:27:48 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:23:06.835 14:27:48 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:23:06.835 14:27:48 -- nvmf/common.sh@693 -- # digest=0 00:23:06.835 14:27:48 -- nvmf/common.sh@694 -- # python - 00:23:07.123 14:27:48 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.pX61KJackr 00:23:07.123 14:27:48 -- keyring/common.sh@23 -- # echo /tmp/tmp.pX61KJackr 00:23:07.123 14:27:48 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.pX61KJackr 00:23:07.123 14:27:48 -- keyring/file.sh@30 -- # tgtpid=3234686 00:23:07.123 14:27:48 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:23:07.123 14:27:48 -- keyring/file.sh@32 -- # waitforlisten 3234686 00:23:07.123 14:27:48 -- common/autotest_common.sh@817 -- # '[' -z 3234686 ']' 00:23:07.123 14:27:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.123 14:27:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:07.123 14:27:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.123 14:27:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:07.123 14:27:48 -- common/autotest_common.sh@10 -- # set +x 00:23:07.123 [2024-04-26 14:27:48.494931] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:23:07.123 [2024-04-26 14:27:48.495022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3234686 ] 00:23:07.123 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.123 [2024-04-26 14:27:48.556697] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.410 [2024-04-26 14:27:48.674533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.410 14:27:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:07.410 14:27:48 -- common/autotest_common.sh@850 -- # return 0 00:23:07.410 14:27:48 -- keyring/file.sh@33 -- # rpc_cmd 00:23:07.410 14:27:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.410 14:27:48 -- common/autotest_common.sh@10 -- # set +x 00:23:07.410 [2024-04-26 14:27:48.910360] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.410 null0 00:23:07.410 [2024-04-26 14:27:48.942411] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:07.410 [2024-04-26 14:27:48.942778] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:07.410 [2024-04-26 14:27:48.950436] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:07.410 14:27:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.410 14:27:48 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:07.410 14:27:48 -- common/autotest_common.sh@638 -- # local es=0 00:23:07.410 14:27:48 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:07.410 14:27:48 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:07.410 14:27:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:07.410 14:27:48 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:07.410 14:27:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:07.410 14:27:48 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:07.410 14:27:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.410 14:27:48 -- common/autotest_common.sh@10 -- # set +x 00:23:07.410 [2024-04-26 14:27:48.962457] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:23:07.410 { 00:23:07.410 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:23:07.410 "secure_channel": false, 00:23:07.410 "listen_address": { 00:23:07.410 "trtype": "tcp", 00:23:07.410 "traddr": "127.0.0.1", 00:23:07.410 "trsvcid": "4420" 00:23:07.410 }, 00:23:07.410 "method": "nvmf_subsystem_add_listener", 00:23:07.410 "req_id": 1 00:23:07.410 } 00:23:07.410 Got JSON-RPC error response 00:23:07.410 response: 00:23:07.410 { 00:23:07.410 "code": -32602, 00:23:07.410 "message": "Invalid parameters" 00:23:07.410 } 00:23:07.410 14:27:48 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:07.410 14:27:48 -- common/autotest_common.sh@641 -- # es=1 00:23:07.410 14:27:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:07.410 14:27:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:07.410 14:27:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:07.410 14:27:48 -- keyring/file.sh@46 -- # bperfpid=3234782 00:23:07.410 14:27:48 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:23:07.410 14:27:48 -- keyring/file.sh@48 -- # waitforlisten 3234782 /var/tmp/bperf.sock 00:23:07.410 14:27:48 -- common/autotest_common.sh@817 -- # '[' -z 3234782 ']' 00:23:07.410 14:27:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:07.410 14:27:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:07.410 14:27:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:07.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:07.410 14:27:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:07.410 14:27:48 -- common/autotest_common.sh@10 -- # set +x 00:23:07.668 [2024-04-26 14:27:49.013193] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 
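The /tmp/tmp.* files fed to keyring_file_add_key in this run come out of prep_key, which calls format_interchange_psk from nvmf/common.sh; the trace shows it shelling out to an inline `python -`. A rough sketch of that derivation, assuming the TP 8011 interchange layout NVMeTLSkey-1:<digest>:<base64(key || crc32(key))>: with "00" meaning no PSK digest; the digest field encoding and the CRC framing are assumptions here, not copied from the SPDK source:

  # hypothetical stand-in for format_interchange_psk; the "00" digest field and
  # little-endian CRC32 framing are assumed, not verified against nvmf/common.sh
  format_interchange_psk() {
      python3 -c 'import base64,sys,zlib; raw=bytes.fromhex(sys.argv[1]); crc=zlib.crc32(raw).to_bytes(4,"little"); print("NVMeTLSkey-1:00:"+base64.b64encode(raw+crc).decode()+":")' "$1"
  }
  path=$(mktemp) && format_interchange_psk 00112233445566778899aabbccddeeff > "$path"
  chmod 0600 "$path"    # keyring_file insists on owner-only permissions, as tested below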
00:23:07.668 [2024-04-26 14:27:49.013289] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3234782 ] 00:23:07.668 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.668 [2024-04-26 14:27:49.071669] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.668 [2024-04-26 14:27:49.186688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.926 14:27:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:07.926 14:27:49 -- common/autotest_common.sh@850 -- # return 0 00:23:07.926 14:27:49 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dXGeRL8Yth 00:23:07.926 14:27:49 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dXGeRL8Yth 00:23:08.184 14:27:49 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.pX61KJackr 00:23:08.184 14:27:49 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.pX61KJackr 00:23:08.443 14:27:49 -- keyring/file.sh@51 -- # get_key key0 00:23:08.443 14:27:49 -- keyring/file.sh@51 -- # jq -r .path 00:23:08.443 14:27:49 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:08.443 14:27:49 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:08.443 14:27:49 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:08.701 14:27:50 -- keyring/file.sh@51 -- # [[ /tmp/tmp.dXGeRL8Yth == \/\t\m\p\/\t\m\p\.\d\X\G\e\R\L\8\Y\t\h ]] 00:23:08.701 14:27:50 -- keyring/file.sh@52 -- # get_key key1 00:23:08.701 14:27:50 -- keyring/file.sh@52 -- # jq -r .path 00:23:08.701 14:27:50 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:08.701 14:27:50 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:08.701 14:27:50 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:08.959 14:27:50 -- keyring/file.sh@52 -- # [[ /tmp/tmp.pX61KJackr == \/\t\m\p\/\t\m\p\.\p\X\6\1\K\J\a\c\k\r ]] 00:23:08.959 14:27:50 -- keyring/file.sh@53 -- # get_refcnt key0 00:23:08.959 14:27:50 -- keyring/common.sh@12 -- # get_key key0 00:23:08.959 14:27:50 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:08.959 14:27:50 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:08.959 14:27:50 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:08.959 14:27:50 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:08.959 14:27:50 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:23:08.959 14:27:50 -- keyring/file.sh@54 -- # get_refcnt key1 00:23:08.959 14:27:50 -- keyring/common.sh@12 -- # get_key key1 00:23:08.959 14:27:50 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:08.959 14:27:50 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:08.959 14:27:50 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:08.959 14:27:50 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:09.217 14:27:50 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:23:09.217 
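Both daemons in the trace are gated on waitforlisten, which polls the freshly started process's UNIX-domain RPC socket until it answers. A stripped-down sketch; the real helper in autotest_common.sh also handles TCP addresses and uses the max_retries=100 visible above, so the retry budget here is illustrative:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      for _ in $(seq 1 100); do
          kill -0 "$pid" 2>/dev/null || return 1                   # target died during startup
          scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1                                                     # never came up
  }
  waitforlisten "$tgtpid" /var/tmp/spdk.sock
  waitforlisten "$bperfpid" /var/tmp/bperf.sock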
14:27:50 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:09.217 14:27:50 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:09.475 [2024-04-26 14:27:50.989346] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:09.733 nvme0n1 00:23:09.733 14:27:51 -- keyring/file.sh@59 -- # get_refcnt key0 00:23:09.733 14:27:51 -- keyring/common.sh@12 -- # get_key key0 00:23:09.733 14:27:51 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:09.733 14:27:51 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:09.733 14:27:51 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:09.733 14:27:51 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:09.992 14:27:51 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:23:09.992 14:27:51 -- keyring/file.sh@60 -- # get_refcnt key1 00:23:09.992 14:27:51 -- keyring/common.sh@12 -- # get_key key1 00:23:09.992 14:27:51 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:09.992 14:27:51 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:09.992 14:27:51 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:09.992 14:27:51 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:10.249 14:27:51 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:23:10.249 14:27:51 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:10.249 Running I/O for 1 seconds... 
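Condensed, the positive path the trace has just exercised boils down to registering the PSK file, attaching a controller that names it, and kicking off I/O, all against the bdevperf RPC socket (paths exactly as above):

  rpc="scripts/rpc.py -s /var/tmp/bperf.sock"           # what bperf_cmd expands to
  $rpc keyring_file_add_key key0 /tmp/tmp.dXGeRL8Yth    # register the 0600 PSK file
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests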
00:23:11.183 00:23:11.183 Latency(us) 00:23:11.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.183 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:23:11.183 nvme0n1 : 1.05 7309.33 28.55 0.00 0.00 16777.97 7233.23 51652.08 00:23:11.183 =================================================================================================================== 00:23:11.183 Total : 7309.33 28.55 0.00 0.00 16777.97 7233.23 51652.08 00:23:11.183 0 00:23:11.183 14:27:52 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:11.183 14:27:52 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:11.748 14:27:53 -- keyring/file.sh@65 -- # get_refcnt key0 00:23:11.748 14:27:53 -- keyring/common.sh@12 -- # get_key key0 00:23:11.748 14:27:53 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:11.748 14:27:53 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:11.748 14:27:53 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:11.748 14:27:53 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:12.006 14:27:53 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:23:12.006 14:27:53 -- keyring/file.sh@66 -- # get_refcnt key1 00:23:12.006 14:27:53 -- keyring/common.sh@12 -- # get_key key1 00:23:12.006 14:27:53 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:12.006 14:27:53 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:12.006 14:27:53 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:12.006 14:27:53 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:12.264 14:27:53 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:23:12.264 14:27:53 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:12.264 14:27:53 -- common/autotest_common.sh@638 -- # local es=0 00:23:12.264 14:27:53 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:12.264 14:27:53 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:23:12.264 14:27:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:12.264 14:27:53 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:23:12.264 14:27:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:12.264 14:27:53 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:12.264 14:27:53 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:12.522 [2024-04-26 14:27:53.902615] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:12.522 [2024-04-26 14:27:53.903185] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x72dce0 (107): Transport endpoint is not connected 00:23:12.522 [2024-04-26 14:27:53.904173] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x72dce0 (9): Bad file descriptor 00:23:12.522 [2024-04-26 14:27:53.905188] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:12.522 [2024-04-26 14:27:53.905210] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:12.522 [2024-04-26 14:27:53.905225] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:12.522 request: 00:23:12.522 { 00:23:12.522 "name": "nvme0", 00:23:12.522 "trtype": "tcp", 00:23:12.522 "traddr": "127.0.0.1", 00:23:12.522 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:12.522 "adrfam": "ipv4", 00:23:12.522 "trsvcid": "4420", 00:23:12.522 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:12.522 "psk": "key1", 00:23:12.522 "method": "bdev_nvme_attach_controller", 00:23:12.522 "req_id": 1 00:23:12.522 } 00:23:12.522 Got JSON-RPC error response 00:23:12.522 response: 00:23:12.522 { 00:23:12.522 "code": -32602, 00:23:12.522 "message": "Invalid parameters" 00:23:12.522 } 00:23:12.522 14:27:53 -- common/autotest_common.sh@641 -- # es=1 00:23:12.522 14:27:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:12.522 14:27:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:12.522 14:27:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:12.522 14:27:53 -- keyring/file.sh@71 -- # get_refcnt key0 00:23:12.522 14:27:53 -- keyring/common.sh@12 -- # get_key key0 00:23:12.522 14:27:53 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:12.522 14:27:53 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:12.522 14:27:53 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:12.522 14:27:53 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:12.781 14:27:54 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:23:12.781 14:27:54 -- keyring/file.sh@72 -- # get_refcnt key1 00:23:12.781 14:27:54 -- keyring/common.sh@12 -- # get_key key1 00:23:12.781 14:27:54 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:12.781 14:27:54 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:12.781 14:27:54 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:12.781 14:27:54 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:13.039 14:27:54 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:23:13.039 14:27:54 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:23:13.039 14:27:54 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:13.297 14:27:54 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:23:13.297 14:27:54 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:23:13.555 14:27:54 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:23:13.555 14:27:54 -- keyring/file.sh@77 -- # jq length 00:23:13.555 14:27:54 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:13.812 14:27:55 -- 
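Both negative checks so far, the duplicate-listener RPC returning -32602 and the key1 attach dying with errno 107, run under the NOT wrapper from autotest_common.sh. Stripped of the xtrace and exit-status bookkeeping seen above, it is essentially an inverted exit code (simplified sketch):

  NOT() {
      # pass only when the wrapped command fails; per the trace, the real helper
      # also records es and treats statuses above 128 (signal deaths) specially
      if "$@"; then return 1; else return 0; fi
  }
  NOT scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1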
keyring/file.sh@77 -- # (( 0 == 0 )) 00:23:13.812 14:27:55 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.dXGeRL8Yth 00:23:13.812 14:27:55 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.dXGeRL8Yth 00:23:13.812 14:27:55 -- common/autotest_common.sh@638 -- # local es=0 00:23:13.812 14:27:55 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.dXGeRL8Yth 00:23:13.812 14:27:55 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:23:13.812 14:27:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:13.812 14:27:55 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:23:13.812 14:27:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:13.812 14:27:55 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dXGeRL8Yth 00:23:13.812 14:27:55 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dXGeRL8Yth 00:23:14.070 [2024-04-26 14:27:55.423171] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.dXGeRL8Yth': 0100660 00:23:14.070 [2024-04-26 14:27:55.423215] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:14.070 request: 00:23:14.070 { 00:23:14.070 "name": "key0", 00:23:14.070 "path": "/tmp/tmp.dXGeRL8Yth", 00:23:14.070 "method": "keyring_file_add_key", 00:23:14.070 "req_id": 1 00:23:14.070 } 00:23:14.070 Got JSON-RPC error response 00:23:14.070 response: 00:23:14.070 { 00:23:14.070 "code": -1, 00:23:14.070 "message": "Operation not permitted" 00:23:14.070 } 00:23:14.070 14:27:55 -- common/autotest_common.sh@641 -- # es=1 00:23:14.070 14:27:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:14.070 14:27:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:14.070 14:27:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:14.070 14:27:55 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.dXGeRL8Yth 00:23:14.070 14:27:55 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dXGeRL8Yth 00:23:14.070 14:27:55 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dXGeRL8Yth 00:23:14.328 14:27:55 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.dXGeRL8Yth 00:23:14.328 14:27:55 -- keyring/file.sh@88 -- # get_refcnt key0 00:23:14.328 14:27:55 -- keyring/common.sh@12 -- # get_key key0 00:23:14.329 14:27:55 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:14.329 14:27:55 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:14.329 14:27:55 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:14.329 14:27:55 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:14.586 14:27:55 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:23:14.586 14:27:55 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:14.586 14:27:55 -- common/autotest_common.sh@638 -- # local es=0 00:23:14.586 14:27:55 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:14.586 14:27:55 -- 
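The chmod round-trip above pins down keyring_file's permission policy: a group-accessible 0660 key file is rejected ("Invalid permissions for key file ... 0100660", surfacing as -1/Operation not permitted over JSON-RPC), and the same file is accepted again once it is owner-only. Reduced to its essence:

  chmod 0660 /tmp/tmp.dXGeRL8Yth     # too permissive: add_key must fail
  NOT scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dXGeRL8Yth
  chmod 0600 /tmp/tmp.dXGeRL8Yth     # owner-only: accepted
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dXGeRL8Yth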
common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:23:14.586 14:27:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:14.586 14:27:55 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:23:14.586 14:27:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:14.586 14:27:55 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:14.586 14:27:55 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:14.843 [2024-04-26 14:27:56.165155] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.dXGeRL8Yth': No such file or directory 00:23:14.843 [2024-04-26 14:27:56.165205] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:23:14.843 [2024-04-26 14:27:56.165248] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:23:14.843 [2024-04-26 14:27:56.165269] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:14.843 [2024-04-26 14:27:56.165283] bdev_nvme.c:6204:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:23:14.843 request: 00:23:14.843 { 00:23:14.843 "name": "nvme0", 00:23:14.843 "trtype": "tcp", 00:23:14.843 "traddr": "127.0.0.1", 00:23:14.843 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:14.843 "adrfam": "ipv4", 00:23:14.843 "trsvcid": "4420", 00:23:14.843 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:14.843 "psk": "key0", 00:23:14.843 "method": "bdev_nvme_attach_controller", 00:23:14.843 "req_id": 1 00:23:14.843 } 00:23:14.843 Got JSON-RPC error response 00:23:14.843 response: 00:23:14.843 { 00:23:14.843 "code": -19, 00:23:14.843 "message": "No such device" 00:23:14.843 } 00:23:14.843 14:27:56 -- common/autotest_common.sh@641 -- # es=1 00:23:14.843 14:27:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:14.843 14:27:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:14.843 14:27:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:14.843 14:27:56 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:23:14.843 14:27:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:15.099 14:27:56 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:15.099 14:27:56 -- keyring/common.sh@15 -- # local name key digest path 00:23:15.099 14:27:56 -- keyring/common.sh@17 -- # name=key0 00:23:15.099 14:27:56 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:15.099 14:27:56 -- keyring/common.sh@17 -- # digest=0 00:23:15.099 14:27:56 -- keyring/common.sh@18 -- # mktemp 00:23:15.099 14:27:56 -- keyring/common.sh@18 -- # path=/tmp/tmp.r0zCVqrAPC 00:23:15.099 14:27:56 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:15.099 14:27:56 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:15.099 14:27:56 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:15.099 14:27:56 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:23:15.099 14:27:56 -- 
nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:23:15.099 14:27:56 -- nvmf/common.sh@693 -- # digest=0 00:23:15.099 14:27:56 -- nvmf/common.sh@694 -- # python - 00:23:15.099 14:27:56 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.r0zCVqrAPC 00:23:15.099 14:27:56 -- keyring/common.sh@23 -- # echo /tmp/tmp.r0zCVqrAPC 00:23:15.099 14:27:56 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.r0zCVqrAPC 00:23:15.099 14:27:56 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.r0zCVqrAPC 00:23:15.099 14:27:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.r0zCVqrAPC 00:23:15.356 14:27:56 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:15.356 14:27:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:15.614 nvme0n1 00:23:15.614 14:27:57 -- keyring/file.sh@99 -- # get_refcnt key0 00:23:15.614 14:27:57 -- keyring/common.sh@12 -- # get_key key0 00:23:15.614 14:27:57 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:15.614 14:27:57 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:15.614 14:27:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:15.614 14:27:57 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:15.872 14:27:57 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:23:15.872 14:27:57 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:23:15.872 14:27:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:16.130 14:27:57 -- keyring/file.sh@101 -- # get_key key0 00:23:16.130 14:27:57 -- keyring/file.sh@101 -- # jq -r .removed 00:23:16.130 14:27:57 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:16.130 14:27:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:16.130 14:27:57 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:16.388 14:27:57 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:23:16.388 14:27:57 -- keyring/file.sh@102 -- # get_refcnt key0 00:23:16.388 14:27:57 -- keyring/common.sh@12 -- # get_key key0 00:23:16.388 14:27:57 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:16.388 14:27:57 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:16.388 14:27:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:16.388 14:27:57 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:16.646 14:27:57 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:23:16.646 14:27:57 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:16.646 14:27:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:16.935 14:27:58 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:23:16.935 14:27:58 -- keyring/file.sh@104 -- # jq length 00:23:16.935 
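The .removed/.refcnt probes above capture the deletion semantics under test: removing a key that a live controller still holds marks it removed:true while its reference count stays at 1 until bdev_nvme_detach_controller releases it. The probe, compacted from the get_key/get_refcnt helpers in keyring/common.sh:

  rpc="scripts/rpc.py -s /var/tmp/bperf.sock"
  $rpc keyring_file_remove_key key0
  $rpc keyring_get_keys | jq '.[] | select(.name == "key0") | {refcnt, removed}'
  # while nvme0 is still attached this prints {"refcnt": 1, "removed": true}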
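The remainder of the test, below, closes the loop on persistence: save_config dumps the live keyring and bdev configuration as JSON out of the first bdevperf, and a second bdevperf instance is booted directly from that dump; the -c /dev/fd/63 in the trace is bash process substitution at work. A sketch of the pattern (flags taken from the launch line above; -z keeps bdevperf waiting for perform_tests over RPC instead of running immediately):

  config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)
  build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c <(echo "$config")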
14:27:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:16.935 14:27:58 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:23:16.935 14:27:58 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.r0zCVqrAPC 00:23:16.935 14:27:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.r0zCVqrAPC 00:23:17.193 14:27:58 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.pX61KJackr 00:23:17.193 14:27:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.pX61KJackr 00:23:17.451 14:27:58 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:17.451 14:27:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:17.707 nvme0n1 00:23:17.964 14:27:59 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:23:17.964 14:27:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:23:18.223 14:27:59 -- keyring/file.sh@112 -- # config='{ 00:23:18.223 "subsystems": [ 00:23:18.223 { 00:23:18.223 "subsystem": "keyring", 00:23:18.223 "config": [ 00:23:18.223 { 00:23:18.223 "method": "keyring_file_add_key", 00:23:18.223 "params": { 00:23:18.223 "name": "key0", 00:23:18.223 "path": "/tmp/tmp.r0zCVqrAPC" 00:23:18.223 } 00:23:18.223 }, 00:23:18.223 { 00:23:18.223 "method": "keyring_file_add_key", 00:23:18.223 "params": { 00:23:18.223 "name": "key1", 00:23:18.223 "path": "/tmp/tmp.pX61KJackr" 00:23:18.223 } 00:23:18.223 } 00:23:18.223 ] 00:23:18.223 }, 00:23:18.223 { 00:23:18.223 "subsystem": "iobuf", 00:23:18.223 "config": [ 00:23:18.223 { 00:23:18.223 "method": "iobuf_set_options", 00:23:18.223 "params": { 00:23:18.223 "small_pool_count": 8192, 00:23:18.223 "large_pool_count": 1024, 00:23:18.223 "small_bufsize": 8192, 00:23:18.223 "large_bufsize": 135168 00:23:18.223 } 00:23:18.223 } 00:23:18.223 ] 00:23:18.223 }, 00:23:18.223 { 00:23:18.223 "subsystem": "sock", 00:23:18.223 "config": [ 00:23:18.223 { 00:23:18.223 "method": "sock_impl_set_options", 00:23:18.223 "params": { 00:23:18.223 "impl_name": "posix", 00:23:18.223 "recv_buf_size": 2097152, 00:23:18.223 "send_buf_size": 2097152, 00:23:18.223 "enable_recv_pipe": true, 00:23:18.223 "enable_quickack": false, 00:23:18.223 "enable_placement_id": 0, 00:23:18.223 "enable_zerocopy_send_server": true, 00:23:18.223 "enable_zerocopy_send_client": false, 00:23:18.223 "zerocopy_threshold": 0, 00:23:18.223 "tls_version": 0, 00:23:18.223 "enable_ktls": false 00:23:18.223 } 00:23:18.223 }, 00:23:18.223 { 00:23:18.223 "method": "sock_impl_set_options", 00:23:18.223 "params": { 00:23:18.223 "impl_name": "ssl", 00:23:18.223 "recv_buf_size": 4096, 00:23:18.223 "send_buf_size": 4096, 00:23:18.223 "enable_recv_pipe": true, 00:23:18.223 "enable_quickack": false, 00:23:18.223 "enable_placement_id": 0, 00:23:18.223 "enable_zerocopy_send_server": true, 00:23:18.223 "enable_zerocopy_send_client": false, 00:23:18.223 "zerocopy_threshold": 0, 00:23:18.223 
"tls_version": 0, 00:23:18.223 "enable_ktls": false 00:23:18.223 } 00:23:18.223 } 00:23:18.223 ] 00:23:18.223 }, 00:23:18.223 { 00:23:18.223 "subsystem": "vmd", 00:23:18.223 "config": [] 00:23:18.223 }, 00:23:18.223 { 00:23:18.223 "subsystem": "accel", 00:23:18.223 "config": [ 00:23:18.223 { 00:23:18.223 "method": "accel_set_options", 00:23:18.223 "params": { 00:23:18.223 "small_cache_size": 128, 00:23:18.223 "large_cache_size": 16, 00:23:18.223 "task_count": 2048, 00:23:18.223 "sequence_count": 2048, 00:23:18.223 "buf_count": 2048 00:23:18.223 } 00:23:18.223 } 00:23:18.223 ] 00:23:18.223 }, 00:23:18.223 { 00:23:18.223 "subsystem": "bdev", 00:23:18.223 "config": [ 00:23:18.223 { 00:23:18.223 "method": "bdev_set_options", 00:23:18.223 "params": { 00:23:18.223 "bdev_io_pool_size": 65535, 00:23:18.223 "bdev_io_cache_size": 256, 00:23:18.223 "bdev_auto_examine": true, 00:23:18.223 "iobuf_small_cache_size": 128, 00:23:18.223 "iobuf_large_cache_size": 16 00:23:18.223 } 00:23:18.223 }, 00:23:18.223 { 00:23:18.223 "method": "bdev_raid_set_options", 00:23:18.223 "params": { 00:23:18.223 "process_window_size_kb": 1024 00:23:18.223 } 00:23:18.223 }, 00:23:18.223 { 00:23:18.223 "method": "bdev_iscsi_set_options", 00:23:18.223 "params": { 00:23:18.223 "timeout_sec": 30 00:23:18.223 } 00:23:18.223 }, 00:23:18.223 { 00:23:18.223 "method": "bdev_nvme_set_options", 00:23:18.223 "params": { 00:23:18.223 "action_on_timeout": "none", 00:23:18.223 "timeout_us": 0, 00:23:18.223 "timeout_admin_us": 0, 00:23:18.223 "keep_alive_timeout_ms": 10000, 00:23:18.223 "arbitration_burst": 0, 00:23:18.223 "low_priority_weight": 0, 00:23:18.223 "medium_priority_weight": 0, 00:23:18.223 "high_priority_weight": 0, 00:23:18.223 "nvme_adminq_poll_period_us": 10000, 00:23:18.223 "nvme_ioq_poll_period_us": 0, 00:23:18.223 "io_queue_requests": 512, 00:23:18.223 "delay_cmd_submit": true, 00:23:18.223 "transport_retry_count": 4, 00:23:18.223 "bdev_retry_count": 3, 00:23:18.223 "transport_ack_timeout": 0, 00:23:18.223 "ctrlr_loss_timeout_sec": 0, 00:23:18.223 "reconnect_delay_sec": 0, 00:23:18.223 "fast_io_fail_timeout_sec": 0, 00:23:18.223 "disable_auto_failback": false, 00:23:18.223 "generate_uuids": false, 00:23:18.223 "transport_tos": 0, 00:23:18.223 "nvme_error_stat": false, 00:23:18.223 "rdma_srq_size": 0, 00:23:18.223 "io_path_stat": false, 00:23:18.223 "allow_accel_sequence": false, 00:23:18.223 "rdma_max_cq_size": 0, 00:23:18.223 "rdma_cm_event_timeout_ms": 0, 00:23:18.223 "dhchap_digests": [ 00:23:18.223 "sha256", 00:23:18.223 "sha384", 00:23:18.223 "sha512" 00:23:18.223 ], 00:23:18.223 "dhchap_dhgroups": [ 00:23:18.223 "null", 00:23:18.223 "ffdhe2048", 00:23:18.223 "ffdhe3072", 00:23:18.223 "ffdhe4096", 00:23:18.223 "ffdhe6144", 00:23:18.223 "ffdhe8192" 00:23:18.223 ] 00:23:18.223 } 00:23:18.223 }, 00:23:18.223 { 00:23:18.223 "method": "bdev_nvme_attach_controller", 00:23:18.223 "params": { 00:23:18.223 "name": "nvme0", 00:23:18.223 "trtype": "TCP", 00:23:18.223 "adrfam": "IPv4", 00:23:18.223 "traddr": "127.0.0.1", 00:23:18.223 "trsvcid": "4420", 00:23:18.223 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:18.223 "prchk_reftag": false, 00:23:18.223 "prchk_guard": false, 00:23:18.223 "ctrlr_loss_timeout_sec": 0, 00:23:18.223 "reconnect_delay_sec": 0, 00:23:18.223 "fast_io_fail_timeout_sec": 0, 00:23:18.223 "psk": "key0", 00:23:18.223 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:18.223 "hdgst": false, 00:23:18.223 "ddgst": false 00:23:18.223 } 00:23:18.223 }, 00:23:18.223 { 00:23:18.223 "method": "bdev_nvme_set_hotplug", 
00:23:18.223 "params": { 00:23:18.223 "period_us": 100000, 00:23:18.223 "enable": false 00:23:18.223 } 00:23:18.223 }, 00:23:18.223 { 00:23:18.223 "method": "bdev_wait_for_examine" 00:23:18.223 } 00:23:18.223 ] 00:23:18.224 }, 00:23:18.224 { 00:23:18.224 "subsystem": "nbd", 00:23:18.224 "config": [] 00:23:18.224 } 00:23:18.224 ] 00:23:18.224 }' 00:23:18.224 14:27:59 -- keyring/file.sh@114 -- # killprocess 3234782 00:23:18.224 14:27:59 -- common/autotest_common.sh@936 -- # '[' -z 3234782 ']' 00:23:18.224 14:27:59 -- common/autotest_common.sh@940 -- # kill -0 3234782 00:23:18.224 14:27:59 -- common/autotest_common.sh@941 -- # uname 00:23:18.224 14:27:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:18.224 14:27:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3234782 00:23:18.224 14:27:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:18.224 14:27:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:18.224 14:27:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3234782' 00:23:18.224 killing process with pid 3234782 00:23:18.224 14:27:59 -- common/autotest_common.sh@955 -- # kill 3234782 00:23:18.224 Received shutdown signal, test time was about 1.000000 seconds 00:23:18.224 00:23:18.224 Latency(us) 00:23:18.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.224 =================================================================================================================== 00:23:18.224 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:18.224 14:27:59 -- common/autotest_common.sh@960 -- # wait 3234782 00:23:18.482 14:27:59 -- keyring/file.sh@117 -- # bperfpid=3235923 00:23:18.482 14:27:59 -- keyring/file.sh@119 -- # waitforlisten 3235923 /var/tmp/bperf.sock 00:23:18.482 14:27:59 -- common/autotest_common.sh@817 -- # '[' -z 3235923 ']' 00:23:18.482 14:27:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:18.482 14:27:59 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:23:18.482 14:27:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:18.482 14:27:59 -- keyring/file.sh@115 -- # echo '{ 00:23:18.482 "subsystems": [ 00:23:18.482 { 00:23:18.482 "subsystem": "keyring", 00:23:18.482 "config": [ 00:23:18.482 { 00:23:18.482 "method": "keyring_file_add_key", 00:23:18.482 "params": { 00:23:18.482 "name": "key0", 00:23:18.482 "path": "/tmp/tmp.r0zCVqrAPC" 00:23:18.482 } 00:23:18.482 }, 00:23:18.482 { 00:23:18.482 "method": "keyring_file_add_key", 00:23:18.482 "params": { 00:23:18.482 "name": "key1", 00:23:18.482 "path": "/tmp/tmp.pX61KJackr" 00:23:18.482 } 00:23:18.482 } 00:23:18.482 ] 00:23:18.482 }, 00:23:18.482 { 00:23:18.482 "subsystem": "iobuf", 00:23:18.482 "config": [ 00:23:18.482 { 00:23:18.482 "method": "iobuf_set_options", 00:23:18.482 "params": { 00:23:18.482 "small_pool_count": 8192, 00:23:18.482 "large_pool_count": 1024, 00:23:18.482 "small_bufsize": 8192, 00:23:18.482 "large_bufsize": 135168 00:23:18.482 } 00:23:18.482 } 00:23:18.482 ] 00:23:18.482 }, 00:23:18.482 { 00:23:18.482 "subsystem": "sock", 00:23:18.482 "config": [ 00:23:18.482 { 00:23:18.482 "method": "sock_impl_set_options", 00:23:18.482 "params": { 00:23:18.482 "impl_name": "posix", 00:23:18.482 "recv_buf_size": 2097152, 00:23:18.482 "send_buf_size": 2097152, 00:23:18.482 "enable_recv_pipe": true, 00:23:18.482 
"enable_quickack": false, 00:23:18.482 "enable_placement_id": 0, 00:23:18.482 "enable_zerocopy_send_server": true, 00:23:18.482 "enable_zerocopy_send_client": false, 00:23:18.482 "zerocopy_threshold": 0, 00:23:18.482 "tls_version": 0, 00:23:18.482 "enable_ktls": false 00:23:18.482 } 00:23:18.482 }, 00:23:18.482 { 00:23:18.482 "method": "sock_impl_set_options", 00:23:18.482 "params": { 00:23:18.482 "impl_name": "ssl", 00:23:18.482 "recv_buf_size": 4096, 00:23:18.482 "send_buf_size": 4096, 00:23:18.482 "enable_recv_pipe": true, 00:23:18.482 "enable_quickack": false, 00:23:18.482 "enable_placement_id": 0, 00:23:18.482 "enable_zerocopy_send_server": true, 00:23:18.482 "enable_zerocopy_send_client": false, 00:23:18.482 "zerocopy_threshold": 0, 00:23:18.482 "tls_version": 0, 00:23:18.482 "enable_ktls": false 00:23:18.482 } 00:23:18.482 } 00:23:18.482 ] 00:23:18.482 }, 00:23:18.482 { 00:23:18.482 "subsystem": "vmd", 00:23:18.482 "config": [] 00:23:18.482 }, 00:23:18.482 { 00:23:18.482 "subsystem": "accel", 00:23:18.482 "config": [ 00:23:18.482 { 00:23:18.482 "method": "accel_set_options", 00:23:18.482 "params": { 00:23:18.482 "small_cache_size": 128, 00:23:18.482 "large_cache_size": 16, 00:23:18.482 "task_count": 2048, 00:23:18.482 "sequence_count": 2048, 00:23:18.482 "buf_count": 2048 00:23:18.482 } 00:23:18.482 } 00:23:18.482 ] 00:23:18.482 }, 00:23:18.482 { 00:23:18.482 "subsystem": "bdev", 00:23:18.482 "config": [ 00:23:18.482 { 00:23:18.482 "method": "bdev_set_options", 00:23:18.482 "params": { 00:23:18.482 "bdev_io_pool_size": 65535, 00:23:18.482 "bdev_io_cache_size": 256, 00:23:18.482 "bdev_auto_examine": true, 00:23:18.482 "iobuf_small_cache_size": 128, 00:23:18.482 "iobuf_large_cache_size": 16 00:23:18.482 } 00:23:18.482 }, 00:23:18.482 { 00:23:18.482 "method": "bdev_raid_set_options", 00:23:18.482 "params": { 00:23:18.482 "process_window_size_kb": 1024 00:23:18.482 } 00:23:18.482 }, 00:23:18.482 { 00:23:18.482 "method": "bdev_iscsi_set_options", 00:23:18.482 "params": { 00:23:18.482 "timeout_sec": 30 00:23:18.482 } 00:23:18.482 }, 00:23:18.482 { 00:23:18.482 "method": "bdev_nvme_set_options", 00:23:18.482 "params": { 00:23:18.482 "action_on_timeout": "none", 00:23:18.482 "timeout_us": 0, 00:23:18.482 "timeout_admin_us": 0, 00:23:18.482 "keep_alive_timeout_ms": 10000, 00:23:18.482 "arbitration_burst": 0, 00:23:18.482 "low_priority_weight": 0, 00:23:18.482 "medium_priority_weight": 0, 00:23:18.482 "high_priority_weight": 0, 00:23:18.482 "nvme_adminq_poll_period_us": 10000, 00:23:18.482 "nvme_ioq_poll_period_us": 0, 00:23:18.482 "io_queue_requests": 512, 00:23:18.482 "delay_cmd_submit": true, 00:23:18.482 "transport_retry_count": 4, 00:23:18.482 "bdev_retry_count": 3, 00:23:18.482 "transport_ack_timeout": 0, 00:23:18.482 "ctrlr_loss_timeout_sec": 0, 00:23:18.482 "reconnect_delay_sec": 0, 00:23:18.482 "fast_io_fail_timeout_sec": 0, 00:23:18.482 "disable_auto_failback": false, 00:23:18.482 "generate_uuids": false, 00:23:18.482 "transport_tos": 0, 00:23:18.482 "nvme_error_stat": false, 00:23:18.482 "rdma_srq_size": 0, 00:23:18.482 "io_path_stat": false, 00:23:18.482 "allow_accel_sequence": false, 00:23:18.482 "rdma_max_cq_size": 0, 00:23:18.482 "rdma_cm_event_timeout_ms": 0, 00:23:18.482 "dhchap_digests": [ 00:23:18.482 "sha256", 00:23:18.482 "sha384 14:27:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:23:18.482 ", 00:23:18.482 "sha512" 00:23:18.482 ], 00:23:18.482 "dhchap_dhgroups": [ 00:23:18.482 "null", 00:23:18.482 "ffdhe2048", 00:23:18.482 "ffdhe3072", 00:23:18.482 "ffdhe4096", 00:23:18.482 "ffdhe6144", 00:23:18.482 "ffdhe8192" 00:23:18.482 ] 00:23:18.482 } 00:23:18.482 }, 00:23:18.482 { 00:23:18.482 "method": "bdev_nvme_attach_controller", 00:23:18.482 "params": { 00:23:18.482 "name": "nvme0", 00:23:18.482 "trtype": "TCP", 00:23:18.482 "adrfam": "IPv4", 00:23:18.482 "traddr": "127.0.0.1", 00:23:18.482 "trsvcid": "4420", 00:23:18.482 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:18.482 "prchk_reftag": false, 00:23:18.482 "prchk_guard": false, 00:23:18.482 "ctrlr_loss_timeout_sec": 0, 00:23:18.482 "reconnect_delay_sec": 0, 00:23:18.482 "fast_io_fail_timeout_sec": 0, 00:23:18.482 "psk": "key0", 00:23:18.482 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:18.482 "hdgst": false, 00:23:18.482 "ddgst": false 00:23:18.482 } 00:23:18.482 }, 00:23:18.482 { 00:23:18.482 "method": "bdev_nvme_set_hotplug", 00:23:18.482 "params": { 00:23:18.482 "period_us": 100000, 00:23:18.482 "enable": false 00:23:18.482 } 00:23:18.482 }, 00:23:18.482 { 00:23:18.482 "method": "bdev_wait_for_examine" 00:23:18.482 } 00:23:18.482 ] 00:23:18.482 }, 00:23:18.482 { 00:23:18.482 "subsystem": "nbd", 00:23:18.482 "config": [] 00:23:18.482 } 00:23:18.482 ] 00:23:18.482 }' 00:23:18.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:18.482 14:27:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:18.482 14:27:59 -- common/autotest_common.sh@10 -- # set +x 00:23:18.482 [2024-04-26 14:27:59.885212] Starting SPDK v24.05-pre git sha1 7f48663af / DPDK 23.11.0 initialization... 00:23:18.482 [2024-04-26 14:27:59.885315] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3235923 ] 00:23:18.482 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.482 [2024-04-26 14:27:59.944698] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.740 [2024-04-26 14:28:00.064469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.740 [2024-04-26 14:28:00.234992] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:18.997 14:28:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:18.997 14:28:00 -- common/autotest_common.sh@850 -- # return 0 00:23:18.997 14:28:00 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:23:18.997 14:28:00 -- keyring/file.sh@120 -- # jq length 00:23:18.997 14:28:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:19.255 14:28:00 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:23:19.255 14:28:00 -- keyring/file.sh@121 -- # get_refcnt key0 00:23:19.255 14:28:00 -- keyring/common.sh@12 -- # get_key key0 00:23:19.255 14:28:00 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:19.255 14:28:00 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:19.255 14:28:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:19.255 14:28:00 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:19.512 14:28:00 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:23:19.512 14:28:00 -- keyring/file.sh@122 -- # get_refcnt 
key1 00:23:19.512 14:28:00 -- keyring/common.sh@12 -- # get_key key1 00:23:19.512 14:28:00 -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:19.512 14:28:00 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:19.512 14:28:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:19.512 14:28:00 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:19.769 14:28:01 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:23:19.769 14:28:01 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:23:19.769 14:28:01 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:23:19.769 14:28:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:23:20.028 14:28:01 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:23:20.028 14:28:01 -- keyring/file.sh@1 -- # cleanup 00:23:20.028 14:28:01 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.r0zCVqrAPC /tmp/tmp.pX61KJackr 00:23:20.028 14:28:01 -- keyring/file.sh@20 -- # killprocess 3235923 00:23:20.028 14:28:01 -- common/autotest_common.sh@936 -- # '[' -z 3235923 ']' 00:23:20.028 14:28:01 -- common/autotest_common.sh@940 -- # kill -0 3235923 00:23:20.028 14:28:01 -- common/autotest_common.sh@941 -- # uname 00:23:20.028 14:28:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:20.028 14:28:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3235923 00:23:20.028 14:28:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:20.028 14:28:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:20.028 14:28:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3235923' 00:23:20.028 killing process with pid 3235923 00:23:20.028 14:28:01 -- common/autotest_common.sh@955 -- # kill 3235923 00:23:20.028 Received shutdown signal, test time was about 1.000000 seconds 00:23:20.028 00:23:20.028 Latency(us) 00:23:20.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.028 =================================================================================================================== 00:23:20.028 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:20.028 14:28:01 -- common/autotest_common.sh@960 -- # wait 3235923 00:23:20.286 14:28:01 -- keyring/file.sh@21 -- # killprocess 3234686 00:23:20.286 14:28:01 -- common/autotest_common.sh@936 -- # '[' -z 3234686 ']' 00:23:20.286 14:28:01 -- common/autotest_common.sh@940 -- # kill -0 3234686 00:23:20.286 14:28:01 -- common/autotest_common.sh@941 -- # uname 00:23:20.286 14:28:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:20.286 14:28:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 3234686 00:23:20.286 14:28:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:20.286 14:28:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:20.286 14:28:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 3234686' 00:23:20.286 killing process with pid 3234686 00:23:20.286 14:28:01 -- common/autotest_common.sh@955 -- # kill 3234686 00:23:20.286 [2024-04-26 14:28:01.792414] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:20.286 14:28:01 -- common/autotest_common.sh@960 -- # wait 3234686 00:23:20.852 00:23:20.852 real 0m13.858s 00:23:20.852 user 0m35.081s 
00:23:20.852 sys 0m3.112s 00:23:20.852 14:28:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:20.852 14:28:02 -- common/autotest_common.sh@10 -- # set +x 00:23:20.852 ************************************ 00:23:20.852 END TEST keyring_file 00:23:20.852 ************************************ 00:23:20.852 14:28:02 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:23:20.852 14:28:02 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:23:20.852 14:28:02 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:23:20.852 14:28:02 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:23:20.852 14:28:02 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:23:20.852 14:28:02 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:23:20.852 14:28:02 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:23:20.852 14:28:02 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:23:20.852 14:28:02 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:23:20.852 14:28:02 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:23:20.852 14:28:02 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:23:20.852 14:28:02 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:23:20.852 14:28:02 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:23:20.852 14:28:02 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:23:20.852 14:28:02 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:23:20.852 14:28:02 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:23:20.852 14:28:02 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:23:20.852 14:28:02 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:23:20.852 14:28:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:20.852 14:28:02 -- common/autotest_common.sh@10 -- # set +x 00:23:20.852 14:28:02 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:23:20.852 14:28:02 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:23:20.852 14:28:02 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:23:20.852 14:28:02 -- common/autotest_common.sh@10 -- # set +x 00:23:22.228 INFO: APP EXITING 00:23:22.228 INFO: killing all VMs 00:23:22.228 INFO: killing vhost app 00:23:22.228 WARN: no vhost pid file found 00:23:22.228 INFO: EXIT DONE 00:23:23.165 0000:84:00.0 (8086 0a54): Already using the nvme driver 00:23:23.165 0000:00:04.7 (8086 3c27): Already using the ioatdma driver 00:23:23.165 0000:00:04.6 (8086 3c26): Already using the ioatdma driver 00:23:23.165 0000:00:04.5 (8086 3c25): Already using the ioatdma driver 00:23:23.165 0000:00:04.4 (8086 3c24): Already using the ioatdma driver 00:23:23.165 0000:00:04.3 (8086 3c23): Already using the ioatdma driver 00:23:23.165 0000:00:04.2 (8086 3c22): Already using the ioatdma driver 00:23:23.165 0000:00:04.1 (8086 3c21): Already using the ioatdma driver 00:23:23.165 0000:00:04.0 (8086 3c20): Already using the ioatdma driver 00:23:23.165 0000:80:04.7 (8086 3c27): Already using the ioatdma driver 00:23:23.165 0000:80:04.6 (8086 3c26): Already using the ioatdma driver 00:23:23.165 0000:80:04.5 (8086 3c25): Already using the ioatdma driver 00:23:23.165 0000:80:04.4 (8086 3c24): Already using the ioatdma driver 00:23:23.165 0000:80:04.3 (8086 3c23): Already using the ioatdma driver 00:23:23.165 0000:80:04.2 (8086 3c22): Already using the ioatdma driver 00:23:23.165 0000:80:04.1 (8086 3c21): Already using the ioatdma driver 00:23:23.165 0000:80:04.0 (8086 3c20): Already using the ioatdma driver 00:23:24.105 Cleaning 00:23:24.105 Removing: /var/run/dpdk/spdk0/config 00:23:24.105 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:24.105 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 
00:23:24.105 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:23:24.105 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:23:24.105 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:23:24.105 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:23:24.105 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:23:24.105 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:23:24.105 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:23:24.105 Removing: /var/run/dpdk/spdk0/hugepage_info
00:23:24.105 Removing: /var/run/dpdk/spdk1/config
00:23:24.105 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:23:24.105 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:23:24.105 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:23:24.105 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:23:24.105 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:23:24.105 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:23:24.105 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:23:24.105 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:23:24.105 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:23:24.105 Removing: /var/run/dpdk/spdk1/hugepage_info
00:23:24.105 Removing: /var/run/dpdk/spdk1/mp_socket
00:23:24.105 Removing: /var/run/dpdk/spdk2/config
00:23:24.105 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:23:24.105 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:23:24.105 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:23:24.105 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:23:24.105 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:23:24.364 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:23:24.364 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:23:24.364 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:23:24.364 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:23:24.364 Removing: /var/run/dpdk/spdk2/hugepage_info
00:23:24.364 Removing: /var/run/dpdk/spdk3/config
00:23:24.364 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:23:24.364 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:23:24.364 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:23:24.364 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:23:24.364 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:23:24.364 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:23:24.364 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:23:24.364 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:23:24.364 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:23:24.364 Removing: /var/run/dpdk/spdk3/hugepage_info
00:23:24.364 Removing: /var/run/dpdk/spdk4/config
00:23:24.364 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:23:24.364 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:23:24.364 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:23:24.364 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:23:24.364 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:23:24.364 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:23:24.364 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:23:24.364 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:23:24.364 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:23:24.364 Removing: /var/run/dpdk/spdk4/hugepage_info
00:23:24.364 Removing: /dev/shm/bdev_svc_trace.1
00:23:24.364 Removing: /dev/shm/nvmf_trace.0
00:23:24.364 Removing: /dev/shm/spdk_tgt_trace.pid3057639
00:23:24.364 Removing: /var/run/dpdk/spdk0
00:23:24.364 Removing: /var/run/dpdk/spdk1
00:23:24.364 Removing: /var/run/dpdk/spdk2
00:23:24.364 Removing: /var/run/dpdk/spdk3
00:23:24.364 Removing: /var/run/dpdk/spdk4
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3056276
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3056872
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3057639
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3058147
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3058641
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3058712
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3059363
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3059379
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3059604
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3060636
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3061473
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3061726
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3061887
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3062261
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3062857
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3062995
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3063123
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3063376
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3063835
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3065823
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3066016
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3066152
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3066162
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3066502
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3066518
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3066854
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3066939
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3067093
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3067117
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3067323
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3067340
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3067742
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3067879
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3068054
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3068203
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3068319
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3068459
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3068630
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3068761
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3068976
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3069108
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3069271
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3069460
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3069593
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3069809
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3069948
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3070083
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3070295
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3070429
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3070645
00:23:24.364 Removing: /var/run/dpdk/spdk_pid3070776
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3070917
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3071131
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3071268
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3071489
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3071620
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3071812
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3071916
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3072200
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3073842
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3094364
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3096387
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3100899
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3103446
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3105269
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3105581
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3111248
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3111333
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3111746
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3112267
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3112761
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3113065
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3113073
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3113263
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3113369
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3113372
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3113874
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3114349
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3114783
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3115095
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3115180
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3115285
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3116192
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3117287
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3121476
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3121690
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3123656
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3126609
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3128281
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3133148
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3137175
00:23:24.623 Removing: /var/run/dpdk/spdk_pid3138171
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3138675
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3147231
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3148863
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3151032
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3151928
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3152932
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3153036
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3153140
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3153242
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3153581
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3154586
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3155161
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3155491
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3156732
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3157061
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3157497
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3159366
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3163794
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3165861
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3168840
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3169801
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3171356
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3173356
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3175103
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3178557
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3178565
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3180717
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3180883
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3181008
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3181213
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3181218
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3183166
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3183502
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3185479
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3186979
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3189621
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3192218
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3195610
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3195612
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3205924
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3206246
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3206646
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3206958
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3207418
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3207774
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3208135
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3208453
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3210392
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3210591
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3213522
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3213581
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3214927
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3218812
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3218817
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3221090
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3222180
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3223323
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3223978
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3224971
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3225646
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3230478
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3230774
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3231076
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3232270
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3232525
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3232824
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3234686
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3234782
00:23:24.624 Removing: /var/run/dpdk/spdk_pid3235923
00:23:24.624 Clean
00:23:24.883 14:28:06 -- common/autotest_common.sh@1437 -- # return 0
00:23:24.883 14:28:06 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup
00:23:24.883 14:28:06 -- common/autotest_common.sh@716 -- # xtrace_disable
00:23:24.883 14:28:06 -- common/autotest_common.sh@10 -- # set +x
00:23:24.883 14:28:06 -- spdk/autotest.sh@384 -- # timing_exit autotest
00:23:24.883 14:28:06 -- common/autotest_common.sh@716 -- # xtrace_disable
00:23:24.883 14:28:06 -- common/autotest_common.sh@10 -- # set +x
00:23:24.883 14:28:06 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:23:24.883 14:28:06 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:23:24.883 14:28:06 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:23:24.883 14:28:06 -- spdk/autotest.sh@389 -- # hash lcov
00:23:24.883 14:28:06 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:23:24.883 14:28:06 -- spdk/autotest.sh@391 -- # hostname
00:23:24.883 14:28:06 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-02 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:23:25.142 geninfo: WARNING: invalid characters removed from testname!
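
The long "Removing:" sweep above is the post-test cleanup of per-target DPDK runtime state under /var/run/dpdk (config, fbarray_memseg-* segment maps, fbarray_memzone, hugepage_info, mp_socket), plus stale spdk_pid markers and /dev/shm trace files. As a minimal sketch only, assuming the directory layout shown in the log and not reproducing SPDK's actual cleanup code, an equivalent sweep could look like:

    #!/usr/bin/env bash
    # Illustrative sweep matching the "Removing:" entries above; the paths
    # come from the log, the script itself is an assumption of this sketch.
    set -euo pipefail

    # Per-target DPDK runtime directories (/var/run/dpdk/spdk0 .. spdk4)
    # hold config, fbarray_memseg-*, fbarray_memzone, hugepage_info, mp_socket.
    for dir in /var/run/dpdk/spdk[0-9]*; do
        if [ -d "$dir" ]; then
            rm -rf -- "$dir"
        fi
    done

    # Stale pid markers and shared-memory trace files left by the test run.
    rm -f /var/run/dpdk/spdk_pid* /dev/shm/*_trace.*
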
00:23:57.207 14:28:33 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:23:57.207 14:28:37 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:24:00.489 14:28:41 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:24:03.022 14:28:44 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:24:06.307 14:28:47 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:24:08.846 14:28:50 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:24:11.383 14:28:52 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:24:11.642 14:28:52 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:11.642 14:28:52 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]]
00:24:11.642 14:28:52 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:11.642 14:28:52 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:11.642 14:28:52 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
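
The six lcov invocations above are the coverage post-processing: the baseline and test captures are merged into cov_total.info, and trees that should not count toward SPDK coverage (bundled DPDK, system headers, VMD examples, the spdk_lspci and spdk_top apps) are pruned in place. A condensed sketch of the same sequence, with the long --rc option block from the log elided for readability:

    # Condensed form of the logged lcov steps; OUT mirrors the job's output
    # directory, and the --rc coverage flags are omitted here.
    OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output

    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" \
         -o "$OUT/cov_total.info"

    # Prune each excluded tree from the merged tracefile, in place.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
                   '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
    done
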
00:24:11.642 14:28:52 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:11.642 14:28:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:11.642 14:28:52 -- paths/export.sh@5 -- $ export PATH
00:24:11.642 14:28:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:11.642 14:28:52 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:24:11.642 14:28:52 -- common/autobuild_common.sh@435 -- $ date +%s
00:24:11.642 14:28:52 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714134532.XXXXXX
00:24:11.642 14:28:52 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714134532.zdJi3n
00:24:11.642 14:28:52 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:24:11.642 14:28:52 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:24:11.642 14:28:52 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:24:11.642 14:28:52 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:24:11.642 14:28:52 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:24:11.642 14:28:52 -- common/autobuild_common.sh@451 -- $ get_config_params
00:24:11.642 14:28:52 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:24:11.642 14:28:52 -- common/autotest_common.sh@10 -- $ set +x
00:24:11.642 14:28:52 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
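
The start_monitor_resources trace that follows launches one background collector per monitored resource (CPU load, vmstat, CPU temperature, BMC power) and records each pid in the MONITOR_RESOURCES_PIDS associative array so the collectors can be signalled at exit. A simplified re-creation of that start/stop pattern; the array names, collector paths, and flags come from the trace, while POWER_DIR, the explicit backgrounding, and the trap wiring shown here are assumptions of this sketch (the logged version re-reads each pid from a .pid file under the power directory):

    # Simplified sketch of the pm/common start/stop pattern traced below.
    declare -A MONITOR_RESOURCES_PIDS
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm)
    POWER_DIR=output/power   # stands in for <output>/power from the log

    for monitor in "${MONITOR_RESOURCES[@]}"; do
        # Each collector samples its resource and logs into POWER_DIR.
        sudo -E "scripts/perf/pm/$monitor" -d "$POWER_DIR" -l \
            -p "monitor.autopackage.sh.$(date +%s)" &
        MONITOR_RESOURCES_PIDS["$monitor"]=$!
    done

    stop_monitor_resources() {
        local monitor pid
        for monitor in "${MONITOR_RESOURCES[@]}"; do
            pid=${MONITOR_RESOURCES_PIDS[$monitor]:-}
            [ -n "$pid" ] && sudo kill -TERM "$pid" 2>/dev/null || true
        done
    }
    trap stop_monitor_resources EXIT
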
"${MONITOR_RESOURCES[@]}" 00:24:11.642 14:28:52 -- pm/common@21 -- $ date +%s 00:24:11.642 14:28:52 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3243530 00:24:11.642 14:28:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:11.642 14:28:52 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=3243533 00:24:11.642 14:28:52 -- pm/common@21 -- $ date +%s 00:24:11.642 14:28:52 -- pm/common@26 -- $ sleep 1 00:24:11.642 14:28:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714134533 00:24:11.643 14:28:53 -- pm/common@21 -- $ date +%s 00:24:11.643 14:28:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714134533 00:24:11.643 14:28:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714134533 00:24:11.643 14:28:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714134533 00:24:11.643 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714134533_collect-vmstat.pm.log 00:24:11.643 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714134533_collect-cpu-load.pm.log 00:24:11.643 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714134533_collect-cpu-temp.pm.log 00:24:11.643 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714134533_collect-bmc-pm.bmc.pm.log 00:24:12.607 14:28:54 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:24:12.607 14:28:54 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j32 00:24:12.607 14:28:54 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:24:12.607 14:28:54 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:24:12.607 14:28:54 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:24:12.607 14:28:54 -- spdk/autopackage.sh@19 -- $ timing_finish 00:24:12.608 14:28:54 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:24:12.608 14:28:54 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:24:12.608 14:28:54 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:24:12.608 14:28:54 -- spdk/autopackage.sh@20 -- $ exit 0 00:24:12.608 14:28:54 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:24:12.608 14:28:54 -- pm/common@30 -- $ signal_monitor_resources TERM 00:24:12.608 14:28:54 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:24:12.608 14:28:54 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:12.608 14:28:54 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:24:12.608 14:28:54 -- pm/common@45 -- $ pid=3243542 00:24:12.608 
00:24:12.607 14:28:54 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:24:12.607 14:28:54 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j32
00:24:12.607 14:28:54 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:24:12.607 14:28:54 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:24:12.607 14:28:54 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:24:12.607 14:28:54 -- spdk/autopackage.sh@19 -- $ timing_finish
00:24:12.608 14:28:54 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:24:12.608 14:28:54 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:24:12.608 14:28:54 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:24:12.608 14:28:54 -- spdk/autopackage.sh@20 -- $ exit 0
00:24:12.608 14:28:54 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:24:12.608 14:28:54 -- pm/common@30 -- $ signal_monitor_resources TERM
00:24:12.608 14:28:54 -- pm/common@41 -- $ local monitor pid pids signal=TERM
00:24:12.608 14:28:54 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:24:12.608 14:28:54 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:24:12.608 14:28:54 -- pm/common@45 -- $ pid=3243542
00:24:12.608 14:28:54 -- pm/common@52 -- $ sudo kill -TERM 3243542
00:24:12.608 14:28:54 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:24:12.608 14:28:54 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:24:12.608 14:28:54 -- pm/common@45 -- $ pid=3243544
00:24:12.608 14:28:54 -- pm/common@52 -- $ sudo kill -TERM 3243544
00:24:12.608 14:28:54 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:24:12.608 14:28:54 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:24:12.608 14:28:54 -- pm/common@45 -- $ pid=3243543
00:24:12.608 14:28:54 -- pm/common@52 -- $ sudo kill -TERM 3243543
00:24:12.608 14:28:54 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:24:12.608 14:28:54 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:24:12.608 14:28:54 -- pm/common@45 -- $ pid=3243546
00:24:12.608 14:28:54 -- pm/common@52 -- $ sudo kill -TERM 3243546
00:24:12.895 + [[ -n 2980194 ]]
00:24:12.895 + sudo kill 2980194
00:24:12.906 [Pipeline] }
00:24:12.924 [Pipeline] // stage
00:24:12.930 [Pipeline] }
00:24:12.950 [Pipeline] // timeout
00:24:12.955 [Pipeline] }
00:24:12.975 [Pipeline] // catchError
00:24:12.980 [Pipeline] }
00:24:12.998 [Pipeline] // wrap
00:24:13.007 [Pipeline] }
00:24:13.022 [Pipeline] // catchError
00:24:13.032 [Pipeline] stage
00:24:13.035 [Pipeline] { (Epilogue)
00:24:13.051 [Pipeline] catchError
00:24:13.052 [Pipeline] {
00:24:13.066 [Pipeline] echo
00:24:13.069 Cleanup processes
00:24:13.076 [Pipeline] sh
00:24:13.360 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:24:13.360 3243695 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:24:13.360 3243763 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:24:13.373 [Pipeline] sh
00:24:13.651 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:24:13.652 ++ grep -v 'sudo pgrep'
00:24:13.652 ++ awk '{print $1}'
00:24:13.652 + sudo kill -9 3243695
00:24:13.663 [Pipeline] sh
00:24:13.944 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:24:23.928 [Pipeline] sh
00:24:24.213 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:24:24.213 Artifacts sizes are good
00:24:24.229 [Pipeline] archiveArtifacts
00:24:24.236 Archiving artifacts
00:24:24.419 [Pipeline] sh
00:24:24.702 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:24:24.717 [Pipeline] cleanWs
00:24:24.727 [WS-CLEANUP] Deleting project workspace...
00:24:24.727 [WS-CLEANUP] Deferred wipeout is used...
00:24:24.734 [WS-CLEANUP] done
00:24:24.736 [Pipeline] }
00:24:24.756 [Pipeline] // catchError
00:24:24.769 [Pipeline] sh
00:24:25.048 + logger -p user.info -t JENKINS-CI
00:24:25.057 [Pipeline] }
00:24:25.072 [Pipeline] // stage
00:24:25.077 [Pipeline] }
00:24:25.094 [Pipeline] // node
00:24:25.100 [Pipeline] End of Pipeline
00:24:25.136 Finished: SUCCESS
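
The Epilogue's "Cleanup processes" step above uses a stray-process sweep: pgrep anything still running out of the workspace, filter out the pgrep invocation itself, and kill what remains (here the leftover ipmitool SDR dump, pid 3243695). A standalone sketch of that idiom; the pipeline and flags come from the log, while the WORKSPACE variable is a placeholder of this sketch:

    # Stray-process sweep as used in the "Cleanup processes" step above.
    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    pids=$(sudo pgrep -af "$WORKSPACE" | grep -v 'sudo pgrep' | awk '{print $1}')
    # Tolerate an empty pid list so the sweep never fails the stage.
    [ -n "$pids" ] && sudo kill -9 $pids || true
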